DevOps has become something of a buzzword lately, but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
For many, the development process has become more complex and segregated from operations. Factors such as inefficient communications, manual processes and poor visibility into the deployment process result in production bottlenecks as well as subpar quality throughout the development and delivery cycle.
To address these challenges, organizations have often turned to ad hoc and siloed efforts, so gaps still exist due to the lack of integration across people, processes and tools. The reality is that an effective DevOps solution requires an integrated approach to continuous delivery that optimizes and accelerates the application lifecycle in every phase: development, testing, staging and production.
What this means is that changes made in development are continuously built, integrated and tested for function, performance, system verification and user acceptance, and then staged, ready for production. It can all be brought together through an integration framework that automates the individual tasks across the various stages of the pipeline and continuously delivers changes, providing end-to-end lifecycle management. Continuous automation is necessary in the following key areas (a rough sketch of such a pipeline follows the list):
• Continuous integration provides faster validation and delivery of code changes via automated, repeatable execution of build processes with continuous feedback.
• Continuous deployment provides on-demand environment configuration and the ability to continuously deploy code and configure middleware.
• Continuous testing automates testing in production-like environments.
• Continuous monitoring increases visibility into application performance and provides data to trace and isolate product defects.
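To make the pipeline idea concrete, here is a minimal sketch (in Python) of a driver that chains these stages with fail-fast feedback. The stage names and make targets are placeholders, not part of SmartCloud Continuous Delivery.

import subprocess
import sys

# Placeholder stages; a real integration framework would invoke build,
# deployment and test tooling here.
STAGES = [
    ("build", ["make", "build"]),
    ("integration-test", ["make", "integration-test"]),
    ("deploy-staging", ["make", "deploy-staging"]),
    ("acceptance-test", ["make", "acceptance-test"]),
]

def run_pipeline():
    for name, cmd in STAGES:
        print("running stage: %s" % name)
        if subprocess.call(cmd) != 0:
            # Continuous feedback: stop at the first broken stage.
            print("stage '%s' failed; halting the pipeline" % name)
            sys.exit(1)
    print("all stages passed; change is staged for production")

if __name__ == "__main__":
    run_pipeline()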
With an automated process for moving application changes through progressively richer test environments that mirror the production environment, the chances of errors and rollbacks are greatly reduced.
The result is increased visibility into the delivery pipeline, standardized communication between Dev and Ops and more efficient and accurate delivery of software projects. And the delivery process can scale dynamically as business needs grow.
Here’s how IBM is addressing DevOps: with the launch of SmartCloud Continuous Delivery, an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. SmartCloud Continuous Delivery is also available on Jazz.net.
At times, you may see an out-of-memory condition in SystemOut.log (/opt/IBM/WebSphere/AppServer/profiles/ctgAppSrv01/logs/MXServer) during normal TSAM (Tivoli Service Automation Manager) processing:
[10/23/12 1:03:52:000 EDT] 00000f80 SystemOut O [maximo-ESCALATION.ESCPMZHBCCOWFE][ERROR][SR ] BMXAA4160E - A major exception has occurred.
psdi.util.MXSystemException: BMXAA4160E - A major exception has occurred.
at java.lang.Object.clone(Native Method)
To resolve these types of out-of-memory issues, make sure the heap size is set to the recommended values. You can follow these steps to verify and correct the settings:
- Log on to the WAS Admin Console as the wasadmin user
- Navigate to Application Servers > MXServer > Java and Process Management > Process Definition > Java Virtual Machine
The values of interest are: "Initial Heap Size" and "Maximum Heap Size".
If not already set, change:
initialHeapSize="512" to initialHeapSize="4096"
maximumHeapSize="1024" to maximumHeapSize="6144"
- Restart MXServer
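If you prefer to script this change rather than click through the console, the same settings can be applied with a wsadmin Jython script along these lines; the cell and node names below are placeholders for your own topology.

# Run with: wsadmin.sh -lang jython -f set_heap.py
# 'myCell' and 'myNode' are placeholders; substitute your cell and node names.
serverId = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:MXServer/')
jvm = AdminConfig.list('JavaVirtualMachine', serverId)
AdminConfig.modify(jvm, [['initialHeapSize', '4096'],
                         ['maximumHeapSize', '6144']])
AdminConfig.save()
# Restart MXServer afterwards for the new heap sizes to take effect.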
The Cloud Standards Customer Council has recently released a white paper on Security in the Cloud.
One section of the paper highlights the challenges to be addressed: “As consumers transition their applications and data to use cloud computing, it is critically important that the level of security provided in the cloud environment be equal to or better than the security provided by their traditional IT environment. Failure to ensure appropriate security protection could ultimately result in higher costs and potential loss of business, thus eliminating any of the potential benefits of cloud computing.”
The paper calls out 10 steps one must take to ensure that security in the Cloud is aligned with one’s business requirements.
Free access to the white paper is at http://www.cloud-council.org/security.htm .
An excellent article has been posted on Wired.com: http://insights.wired.com/profiles/blogs/collaboration-in-action-weaving-proven-tech-into-openstack#ixzz29gZqCdKO
The article talks about OpenStack, an emerging open source community focused on industry collaboration, and its development of a cloud infrastructure implementation that has achieved significant market acceptance and growth. Leading vendors across several industries are joining forces to develop a common code base that will enable 'state of the art' cloud infrastructure solutions.
In another OpenStack activity, Jonathan Bryce, Executive Director of the OpenStack Foundation, will be talking to the Cloud Standards Customer Council ( http://www.cloud-council.org/ ) about the organization's mission and direction. You can hear Jonathan on Wednesday, October 24th by registering for the talk at http://bit.ly/RKhhKH
Was out for a few drinks last week with friends from the local tech community, where we try to solve the world’s problems. We were enjoying the sunshine and warm weather on the patio of a local brewery, discussing the interesting topics of the week. Before we got into the presidential election, the talk turned to virtualization, of course, and the current directions their companies are taking. Everyone has heard about the pricing actions from VMware causing major concerns among their customers, but these topics always seem somewhat abstract to me until you hear about people taking action. Well, it turns out the guy who works at a large tech company in Austin was actually in the process of installing KVM to replace VMware in his area of the company due to pricing issues. He went on to say that a CIO friend of his at another company was in the process of doing the same thing. That’s the thing about IT people: once you upset them, they can take decisive action to fix the problem, and they tend to have very long memories.
I also came across this CNET article, sub-titled “Market research firm IDC says that data from a new survey shows that 'open cloud is key for 72 percent of customers.'” Clearly there seems to be a pricing level that makes open cloud solutions look very attractive. As more companies try to balance the need for virtualization and cloud with the costs of the solutions, I think the open cloud will become more attractive.
IBM is working with OpenStack, a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The OpenStack Summit, which is sold out, is going on this week (October 15th) in San Diego; you can read about it and other topics at the OpenStack blog. There are also some great insights on the topic at the Linux Foundation blog from IBM’s Angel Diaz: 3 Projects creating user-driven standards for the open cloud. I think this is definitely a space worth watching…stay tuned for more updates from the patio…
Two additional reports are now available for IBM SmartCloud Cost Management to help you use the existing reports. These reports allow you to enter your parameter choices once, then call any of the other reports directly without having to re-enter the parameters.
The Navigation – Charge report allows you to run any of the reports that show charge information. These have been grouped to allow you to see the reports showing data across accounts, across rates and across time.
These reports can be downloaded from the ISM Library here.
Suppose you want to restrict users or clients to access only a subset of the virtual machines in your virtualized environment. This restriction can be imposed at the report level for the ITMfVE (IBM Tivoli Monitoring for Virtual Environments) reports using Tivoli Common Reporting and Cognos.
It’s been estimated that the number of virtual machines in data centers has increased at least tenfold in the last decade. More than fifty percent of virtualized environments now have more than one brand of hypervisor. The hypervisor promise of cutting infrastructure expense has given way to increases in licensing costs of more than three hundred percent. And the average number of images destroyed? Nobody knows.
In short, the challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity.
A critical piece in solving these challenges, as many organizations have already discovered, is image management. While there are many ad hoc and isolated solutions, there continues to be a real need for comprehensive image lifecycle management: to combat image sprawl, to gain visibility into where images are stored and how they are being used, and to ensure security through timely patching of images. This doesn’t necessarily mean jumping to cloud solutions, especially for businesses that aren’t ready to adopt cloud orchestration yet, but rather implementing image capabilities in the virtualized environment that are robust enough to support high-value applications and the on-ramp to advanced cloud capabilities.
Because images are easy to create and copy, it’s often difficult to decipher which images are crucial, where there is redundancy and where there may be a need for more governance. It is also an ongoing challenge to understand what an image consists of without launching it. This image complexity has resulted in IT staff spending a significant portion of their time on mundane or repetitive tasks such as manually building images and maintaining an image library.
Inserting automation best practices into the process of creating, deploying and managing images can result in immediate time and labor savings: as much as a 40-80% reduction in labor costs by increasing image-to-administrator efficiency. Automation also helps to optimize the efficiency and accuracy of service delivery in the data center.
Once images are captured, they can be deployed as often as needed. Paired with robust, automated, high-scale provisioning, hundreds of new virtual machines can be deployed in minutes, increasing IT efficiency. Images can also be customized based on user needs.
Key to effective image analysis (including image search, drift, version control and image vulnerability) is the use of a federated image library, which pulls together the storage and meta information of images across multiple image repositories and hypervisors.
Image search: With a large amount of image information to contain and understand, it can become difficult to determine the connection between images or their origin. A family-tree hierarchy and grouping of images with version chains simplifies image search by showing how images are linked, when they are in use and where they originated, even in a mixed hypervisor environment. Additionally, the ability to search within images drastically reduces the complexity of finding the right image and the information associated with it.
Image drift: Varying image iterations make it difficult to manage compliance and version control. Frequently, administrators are forced to maintain volumes of duplicate and unnecessary images because it is difficult to ascertain the need, use or ownership of images. Advanced image management can increase visibility into what is inside a virtual machine through a centralized image library, helping to identify opportunities to consolidate images or to determine whether vulnerable images pose security threats.
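As a toy illustration of version chains and drift (in Python), the sketch below models a federated index whose records carry parent links and package inventories. Every class and field name here is invented for the example; none of it comes from any IBM product API.

class ImageRecord(object):
    """Metadata for one image; names are illustrative only."""
    def __init__(self, image_id, repository, parent_id=None, packages=None):
        self.image_id = image_id
        self.repository = repository   # which repository/hypervisor holds it
        self.parent_id = parent_id     # the image this one was copied from
        self.packages = set(packages or [])

class FederatedImageLibrary(object):
    """Pulls image metadata from many repositories into one index."""
    def __init__(self):
        self.images = {}

    def register(self, record):
        self.images[record.image_id] = record

    def version_chain(self, image_id):
        """Walk parent links back to the original golden image."""
        chain = []
        while image_id is not None:
            record = self.images[image_id]
            chain.append(record)
            image_id = record.parent_id
        return chain

    def drift(self, image_id):
        """Packages added/removed relative to the image's parent."""
        record = self.images[image_id]
        if record.parent_id is None:
            return set(), set()
        parent = self.images[record.parent_id]
        return (record.packages - parent.packages,
                parent.packages - record.packages)

Registering a golden image and its customized copies would let version_chain trace any copy back to its origin, and drift report exactly which packages were added or removed along the way.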
With the explosion of images to govern, there is a need to be able to detect vulnerability exposures in images to ensure that no virtual machines are created without the proper level of security patches. All systems, both physical and virtual, need to be patched whether they are distributed or part of the cloud. A simplified, automated patching process can administer virtual images from a single console so you have the scalability to patch as quickly as you can provision, allowing users to maintain golden and copied images in a patched state. With this patching capability, policy enforcement can be accomplished and proven in minutes instead of days, and IT can increase the accuracy and speed of patching enforcement, achieving as much as 98% first pass patch success rate in hours.
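Purely as an illustration, a provisioning workflow could enforce such a policy with a simple gate that refuses any image whose recorded patch inventory is missing a required fix; the patch identifiers below are made up.

# Hypothetical policy gate: block provisioning from unpatched images.
REQUIRED_PATCHES = set(["PATCH-2012-101", "PATCH-2012-117"])  # made-up IDs

def can_provision(image_patches):
    """Return True only if the image carries every required patch."""
    missing = REQUIRED_PATCHES - set(image_patches)
    if missing:
        print("blocked: image is missing patches: %s" % sorted(missing))
        return False
    return True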
The benefits of a comprehensive, integrated image management solution are immediately obvious. Best of all, there is a high degree of reward with very little risk.
And with image sprawl under control, organizations can expand capabilities for richer end-to-end service management across the virtualized infrastructure such as performance management and data protection as well as look to higher value cloud capabilities for faster service delivery.
For more information, here’s an in-depth look at image management as well as trial code to test out image lifecycle management capabilities.
Earlier today, IBM shared its point of view on the future of the data center with
Because IBM's cloud solutions are based on an open standards approach and common tools, optimized for diverse workload needs, organizations can leverage Enterprise Systems within their existing IT environments while protecting existing investments and minimizing the need to retrain or hire specialized skills. And offerings such as