New video released titled:
Creating a volume from the SmartCloud Provisioning user interface and attaching it to an image
See all Tivoli IEA video content here:
dplantz 1000003W46 2,870 Views
New video released titled:
See all Tivoli IEA video content here:
dplantz 1000003W46 3,073 Views
These are in the Administration category
See all Tivoli IEA video content here:
dcosenti 0600006FQ7 3,212 Views
As of today, a security fix has been released for SmartCloud Provisioning 2.1 (SCP 2.1) on IBM Fix Central, in the form of an interim fix (ifix).
The ifix requires SCP 2.1 Fix Pack 3 to be applied first.
Once this ifix is applied, product security is hardened, because the following issue is resolved:
Any user, even one with read-only access to everything, can start, stop, or delete any virtual system using the following CLI command:
The ifix can be downloaded from:
- IBM SmartCloud Provisioning
Then select "Browse for fixes" or search for APAR ZZ00134.
The fix consists of two files:
dplantz 1000003W46 3,705 Views
New Flash training modules released for v2.1
See all Tivoli IEA content here:
lcflatley 270006EHEQ 3,741 Views
This webcast will take place on Tuesday, December 3rd, 2013 at 11:00 AM ET, US
IT users face many issues in their data center networks, including the need to increase bandwidth, provision quickly, and reduce network operational costs. Software Defined Networking (SDN) provides a programmable, automated interface to help tackle those issues. IBM SmartCloud Orchestrator can help you manage and deploy cloud services, including IBM and Juniper's SDN solutions, to help automate and optimize your data center network.
Dilip Kandlur, Distinguished Engineer, Chief Architect of Network Management (IBM)
Chris Rogers, Director, IBM Alliance (Juniper Networks)
DRussell4881 12000070EV 4,119 Views
The trend in programming today is toward greater diversity in datastores that can be applied to a broad set of applications. Developers and data architects need the ability to work not only with traditional relational databases but also with document-based databases.
ShawnJaques 1200007FSY 3,824 Views
The OpenStack summit is going global with the first design summit being held outside of the United States in Hong Kong, November 5-8. In addition, the event is going beyond the primary focus of designing the next release to include a much bigger focus on users. I think that this signals a pretty significant shift from a somewhat interesting but niche technology, to a much more broadly appealing, world ready, foundation for cloud.
Note that I didn’t call OpenStack a cloud management platform. In my mind at least, that requires advanced capabilities such as monitoring, metering and billing, multi-tier application deployment and process workflow that either aren’t in OpenStack or aren’t mature. That doesn’t mean that you shouldn’t consider OpenStack right now. In fact, one of the great things about OpenStack is the open APIs that allow you to control the heterogeneous storage, hypervisor, compute, and network resources programmatically and create your own solution.
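One of those open APIs is the OpenStack Compute (Nova) v2 REST API. As a rough sketch of what "controlling resources programmatically" looks like, the function below builds the request for listing servers; the endpoint, tenant ID, and token values are hypothetical placeholders (in a real cloud the token comes from Keystone, and you would send the request with an HTTP client such as requests):

```python
# Sketch of a direct call to the OpenStack Compute (Nova) v2 REST API.
# The endpoint, tenant ID, and token below are hypothetical placeholders.

def build_list_servers_request(endpoint, tenant_id, token):
    """Build URL and headers for 'list servers': GET /v2/{tenant_id}/servers."""
    return {
        "method": "GET",
        "url": f"{endpoint}/v2/{tenant_id}/servers",
        "headers": {
            "X-Auth-Token": token,       # token previously issued by Keystone
            "Accept": "application/json",
        },
    }

req = build_list_servers_request("http://cloud.example.com:8774",
                                 "demo-tenant", "abc123")
print(req["url"])
# prints http://cloud.example.com:8774/v2/demo-tenant/servers
```

The same pattern applies across the Nova, Cinder, Neutron, and Glance APIs, which is what makes it practical to script your own tooling on top of OpenStack.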
There are two primary reasons that clients are using OpenStack today. The first is time to market. Because OpenStack is relatively easy to deploy and has open APIs that control a broad range of hypervisor, storage, compute and network devices, organizations are able to get up and running very quickly across the devices that they have in their data center today. The second driver for OpenStack usage has been cost. Organizations can choose to run workloads on cheaper hardware, storage, network and hypervisors because OpenStack supports those devices in a very scalable and secure way.
IBM is a founding and Platinum member of the OpenStack Foundation and is also a Headline sponsor of the Hong Kong summit. I’ll be there, working the booth and demonstrating how IBM products built on OpenStack not only inherit the interoperability and heterogeneous support of OpenStack but also fill in gaps to create a fully functional and supported Cloud Management Platform.
I hope that you will consider joining OpenStack (www.openstack.org/join; it's free!) and trying out the code (also free!). I'd also like to encourage you to go to the Hong Kong Summit (www.openstack.org; not free, but relatively cheap). It's going to be the biggest OpenStack event ever, with close to 5,000 participants. There will be plenty of opportunities to learn about using OpenStack, or even to participate in design discussions and influence the direction of the next release.
lcflatley 270006EHEQ 3,263 Views
lcflatley 270006EHEQ 3,245 Views
For all the technical folks who want to stay current on the technologies that matter to today's business, here is an opportunity to catch up on cutting-edge technologies from IBM. You will get a chance to learn about technologies across many areas and to interact closely with IBM executives and senior technical members.
jhkeenan 270001K41A 3,850 Views
Following on from the release of SmartCloud Orchestrator v2.2 in May 2013, the development team is happy to announce the general availability of SCO 2.2 Functional Fix Pack 1.
Please navigate to Fix Central to download the Fix Pack (2.2.0-CSI-ISCO-FP0001).
As well as APAR fixes and general improvements in performance and security, the Fix Pack includes the following enhancements:
- Management from SCO of existing customer-deployed OpenStack installations that use the open source or a third-party version of the OpenStack software.
- Support for enterprise customers to leverage their existing user directory service and grant access to the SCO self-service and administration UIs in compliance with internal authentication and security standards.
- Pattern definitions can be configured so that deployed VMs receive additional storage volumes. This is a critical enhancement for patterns that include database middleware, which requires additional storage for data.
- Additional capacity (CPU, memory, and disk) can be added to or removed from already deployed VMs to address changes in workload requirements.
crosen 060001U4XN 4,247 Views
C&SI IT, the leader in providing enterprise cloud infrastructure, is announcing the general availability of the next major evolution in cloud deployment: the Continuous Delivery Cloud, powered by SmartCloud Orchestrator (SCO) 2.2.
Much of today's use of the C&SI IT virtual infrastructure is classified as enterprise virtualization. C&SI has achieved outstanding capital and expense cost avoidance using this infrastructure. Moving ahead and truly embracing cloud requires a cultural change as well as a technological change. C&SI IT is leading that change by deploying IBM SCO and IBM Continuous Delivery products to enable development, test, and support teams with the tools required for DevOps and dynamic cloud use.
The new Continuous Delivery Cloud environment enables teams to create and deploy production-like environments in minutes...not weeks or months! Based on IBM's SmartCloud Foundation, this environment provides numerous new features:
The infrastructure runs on nine IBM x3850 servers with an XIV SAN, with capacity for approximately 2,000 virtual machines. The installation proved quite easy and robust once adequate physical hardware to host the SCO services was located. Part of the install creates four service VMs running the various components of SCO: Virtual Image Library, Image Construction and Composition Tool, Business Process Manager, and DB2. One configuration issue that I encountered was defining the network with the correct size, even though the subnet mask was provided.
Another issue that I ran into was related to quotas; it is important to verify both the user quotas (nova quota-show id) and the project quotas (nova-manage project quota id).
I also opted to recreate my “flavors” to something that was more meaningful to me. Within SCO, the flavor determines the hardware resources for the deployed VM.
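Conceptually, a flavor is just a named bundle of hardware resources that gets applied to each deployed VM. A small illustrative sketch (the flavor names and sizes here are hypothetical examples of "more meaningful" custom flavors, not SCO defaults):

```python
# Illustrative sketch: a flavor maps a name to the hardware resources
# (vCPUs, RAM, disk) applied to a deployed VM. The names and sizes below
# are hypothetical custom flavors, not SCO or OpenStack defaults.

flavors = {
    # name: (vCPUs, RAM in MB, disk in GB)
    "dev.small":   (1, 2048, 20),
    "test.medium": (2, 4096, 40),
    "prod.large":  (4, 8192, 80),
}

def describe(name):
    vcpus, ram_mb, disk_gb = flavors[name]
    return f"{name}: {vcpus} vCPU, {ram_mb} MB RAM, {disk_gb} GB disk"

print(describe("test.medium"))
# prints test.medium: 2 vCPU, 4096 MB RAM, 40 GB disk
```

Naming flavors after their intended use (dev, test, prod) rather than generic sizes makes it much easier for onboarding teams to pick the right one.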
The environment is currently onboarding various test, development, and support teams. Stay tuned for user feedback on the new cloud environment.
The project background can be found in the original blog post at https://www.ibm.com/developerworks/community/blogs/9e696bfa-94af-4f5a-ab50-c955cca76fd0/entry/tivoli_it_transformation_drinking_our_own_champagne?lang=en.
Additional information can be found in the Cloud Provisioning and Orchestration community (https://www.ibm.com/developerworks/mydeveloperworks/groups/service/html/communityview?communityUuid=e5a54efe-3c9f-491b-af2a-e5400516b5aal).
Quick design and deployment of complex enterprise applications using "Deployment Planning and Automation for Cloud"
Today, with multiple cloud solutions, there have been impressive improvements in the deployment of enterprise applications on the cloud, and tools are available to design these applications conveniently: you can simply drag and drop components and combine them in whatever numbers you need. But although the deployment tools and the designer tools each work well individually, a big gap remains because they work in silos. Topologies designed in the traditional way do not fit well when the applications are actually deployed in the cloud: deployment itself is automated, but the step of taking the topologies/patterns produced by the designer tools into a solution that can deploy them automatically remains manual. This is where the real problem lies, and it prevents users from taking full advantage of the technology.
In this post I am going to talk about a tool, Deployment Planning and Automation for Cloud, which helps bridge this gap and provides a simple approach to application design, construction, and deployment, minimizing the time needed to build and deploy complex applications.
The Deployment Planning and Automation (DP&A) Cloud Accelerator is an integrated solution drawing on the Rational and Tivoli groups of products.
This integrated solution helps you manage your environment for greater resource sharing by letting you plan application deployment patterns. It not only makes the design of application deployment patterns easy, but also automates their deployment into production environments. Automated deployment covers deploying virtual servers and installing and configuring enterprise middleware and applications in a single automation workflow, generated from a visual model of your deployment environment. You can also govern and share application artifacts, standard templates, and deployment plans between development and operations teams, and trace development artifacts to deployed instances to support change management.
The figure below shows end-to-end automation of all the deployment steps using DP&A for the cloud. Deploying a typical standard Java Enterprise Edition (JEE) application in a virtual environment could potentially require hundreds, if not thousands, of parameter specifications (for which default values are often used for simplicity). DP&A for the cloud allows all such parameters to be represented and controlled in a single solution topology in RSA. TSAM and RAFW automation generated from such a topology can complete an end-to-end deployment on a VMware ESX server in about 45 minutes, starting only from a base Linux or AIX virtual image.
The DP&A for the cloud integration asset lets solution architects use a solution modeling tool, Rational Software Architect (RSA), to generate workflows for multiple deployment engines (Tivoli Service Automation Manager (TSAM) and Rational Automation Framework for WebSphere (RAFW)) for end-to-end solution deployment. The generated workflows include, in the same flow, TSAM steps for provisioning VMware virtual machines and/or System p LPARs and installing middleware, and RAFW steps for configuring middleware and installing applications.
Deployment Planning and Automation for Cloud has had its first release (v2.0.0) and is available for use.
An article is also available that describes how DP&A for Cloud helps minimize the time to deploy composite applications.
Here is a typical real-world scenario in which customers use this tool:
An architect designs a topology for their enterprise applications, which could be single-tier, dual-tier, or multi-tier, composed of web/application servers (such as WebSphere) and database servers (such as DB2, SQL Server, or Oracle). The architect can also specify topology-specific characteristics in the design. The topology is then exported, creating two components: a Cloud Service Archive representing the topology, and the automated RAFW steps to deploy the enterprise applications.
For example: applications will be deployed on a Linux server with English localization, in the DMZ network, with the installation type set to "secure". The cloud deployment tool (TSAM) then finds a VMware cluster available at deployment time to deploy servers with the characteristics specified above.
An EAR file for the applications can also be included in the design and deployed onto the topologies created with the designer tool (RSA).
Furthermore, deployed patterns can be analysed for their performance, which can help you improve the topology and redeploy it to the cloud environment.
For example: a topology with a number of web servers and database servers is deployed and analysed for its performance. The team finds that there is a delay in response for one of the applications. For better load balancing, the architect adds another web server to the topology and exports it as a Cloud Service Archive.
This updated topology is then available for quick deployment, and can easily be transferred to other environments for deployment.
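The design/analyse/redeploy loop above can be sketched in a few lines of Python. The Topology class, node names, and export format here are purely illustrative stand-ins, not part of RSA, TSAM, or the Cloud Service Archive format:

```python
# Illustrative sketch of the scenario above: model a multi-tier topology,
# add a web server for load balancing, and "export" it. The class and the
# export dict are hypothetical; a real export produces a Cloud Service
# Archive plus RAFW automation steps.

class Topology:
    def __init__(self, name):
        self.name = name
        self.web_servers = []      # e.g. WebSphere nodes
        self.db_servers = []       # e.g. DB2 nodes
        self.characteristics = {}  # deployment-time constraints

    def add_web_server(self, host):
        self.web_servers.append(host)

    def add_db_server(self, host):
        self.db_servers.append(host)

    def export(self):
        # Stand-in for exporting a Cloud Service Archive + RAFW steps.
        return {
            "archive": {
                "name": self.name,
                "web": list(self.web_servers),
                "db": list(self.db_servers),
                "characteristics": dict(self.characteristics),
            },
            "rafw_steps": ["configure_middleware", "install_application"],
        }

topo = Topology("orders-app")
topo.add_web_server("was-node-1")
topo.add_db_server("db2-node-1")
topo.characteristics.update(os="Linux", network="DMZ", install_type="secure")

# Performance analysis shows one application responding slowly, so the
# architect adds another web server and re-exports the topology.
topo.add_web_server("was-node-2")
archive = topo.export()
print(len(archive["archive"]["web"]))  # prints 2
```

The same exported artifact can then be handed to another environment for deployment, which is the portability benefit the scenario describes.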
Widgets interact with each other using events, enabled through a publication and subscription mechanism in the dashboard. To achieve this interaction, event information is passed using a publish/subscribe model: a widget can publish (broadcast) certain events that other interested widgets subscribe to, and a widget that subscribes to a particular event knows how to react to it. A widget can both publish and subscribe to the events it supports at the same time. This blog covers the details of all the supported events, and how to enable and disable them for a widget.
Settings related to publishing or subscribing to an event can be configured in the "Events" section of the edit menu for any dashboard widget.
This opens up a view to set eventing related preferences:
Note that, based on the capabilities of a widget, its event section may provide different options related to the eventing mechanism. For example, the browser reload widget provides no options in the Published Events/Subscribed Events sections, whereas the refresh timer widget provides only the data refresh event option in the Published Events section. Similarly, charting widgets such as Line Chart, Pie Chart, and Column Chart provide an additional TimeSet event option in the Subscribed Events section.
In the Published Events section, checking NodeClickedOn enables the widget to publish an event on every click on any node; the event carries the condition associated with the node that was clicked. The event is received by all widgets subscribed to NodeClickedOn events, and subscription is enabled by checking NodeClickedOn in the Subscribed Events section. Similarly, a widget can be made to subscribe to other events, such as dataRefresh or TimeSet, by checking the respective event in the Subscribed Events section. However, a subscribed widget may or may not respond to a NodeClickedOn event, depending on the conditions carried in the event: the target widget takes the parameters sent in the event as input, and if the resulting condition is not met in the target widget's dataset, it may ignore the event and not respond.
An important point to note here is that the wire setting takes precedence over broadcast event settings: disabling a wire prevents communication between the two widgets on that wire, regardless of their event settings.
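A minimal sketch of this publish/subscribe mechanism in Python may help. The EventBus and Widget classes are purely illustrative (the dashboard's real implementation is not shown here); they mirror the rules from the text: an event is delivered to every subscriber, a disabled wire blocks delivery regardless of event settings, and a subscriber may still ignore an event whose condition is not met in its own dataset:

```python
# Minimal publish/subscribe sketch of the widget eventing described above.
# EventBus, Widget, and the wire check are illustrative, not the dashboard's
# actual implementation.

class EventBus:
    def __init__(self):
        self.subscribers = {}        # event name -> list of subscribed widgets
        self.disabled_wires = set()  # (publisher, subscriber) name pairs

    def subscribe(self, event, widget):
        self.subscribers.setdefault(event, []).append(widget)

    def disable_wire(self, publisher, subscriber):
        self.disabled_wires.add((publisher.name, subscriber.name))

    def publish(self, event, publisher, payload):
        for widget in self.subscribers.get(event, []):
            # Wire settings take precedence over event settings.
            if (publisher.name, widget.name) in self.disabled_wires:
                continue
            widget.handle(event, payload)

class Widget:
    def __init__(self, name):
        self.name = name
        self.received = []

    def handle(self, event, payload):
        # A subscriber may still ignore the event if the condition carried
        # in the payload is not met in its own dataset.
        if payload.get("condition_met", True):
            self.received.append((event, payload))

bus = EventBus()
topo = Widget("topology-view")
chart = Widget("line-chart")
table = Widget("table-view")
bus.subscribe("NodeClickedOn", chart)
bus.subscribe("NodeClickedOn", table)
bus.disable_wire(topo, table)  # disabled wire: table never sees topo's events

bus.publish("NodeClickedOn", topo, {"node": "db2-node", "condition_met": True})
print(len(chart.received), len(table.received))
# prints 1 0
```

The chart receives the click event while the table does not, which is exactly the precedence rule: the disabled wire wins even though both widgets have NodeClickedOn subscription enabled.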
neillrd 110000RMK7 3,661 Views
With the release of OMEGAMON XE on z/VM and Linux 4.3.0, you now have the ability to integrate the management of z/VM with your other hypervisors in the SmartCloud Monitoring (SCM) dashboard. The 4.3.0 version, along with enhancements to the traditional OMEGAMON workspaces, adds support for Single System Image (SSI) and Live Guest Relocation (LGR), and also includes a .jar file that lets you easily integrate z/VM into your SCM dashboard. As you can see in the following screen capture, the z/VM and Linux dashboard is integrated seamlessly with other hypervisors, such as VMware.