cynthyap | Tags: cloud-computing, provisioning, cloudops, cloud, cloud-monitoring, devops, cloud_computing, smartcloud, smartcloudprovisioning | 7,774 views
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
For many, the development process has become more complex and segregated from operations. Factors such as inefficient communications, manual processes and poor visibility into the deployment process result in production bottlenecks as well as subpar quality throughout the development and delivery cycle.
To address these challenges, organizations have often turned to ad hoc and siloed efforts, and so gaps still exist due to a lack of integration across people, processes and tools. The reality is that an effective DevOps solution requires an integrated approach of continuous delivery that optimizes and accelerates the application lifecycle in every phase: development, testing, staging and production.
What this means is that changes made in development are continuously built, integrated and tested for function, performance, systems verifications, user acceptance, and then staged, ready for production. And it can all be brought together through an integration framework that can automate the individual tasks across the various stages of the pipeline and continuously deliver changes, providing end-to-end lifecycle management. Continuous automation is necessary in the following key areas:
• Continuous integration provides faster validation and delivery of code changes via automated, repeatable execution of build processes with continuous feedback
• Continuous deployment provides on-demand environment configuration and the ability to continuously deploy code and middleware configurations.
• Continuous testing automates testing in production-like environments.
• Continuous monitoring increases visibility into application performance and provides data to trace and isolate product defects.
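As a rough illustration, the four continuous practices above can be chained into one automated pipeline in which a failure at any stage stops the promotion of a change. This is a minimal sketch; the stage names and checks are hypothetical and not part of any IBM product:

```python
# Minimal sketch of a continuous-delivery pipeline: each stage is an
# automated, repeatable step, and a failure anywhere stops promotion.
def run_pipeline(change, stages):
    """Run `change` through each stage; return (succeeded, feedback log)."""
    log = []
    for name, step in stages:
        ok = step(change)
        log.append((name, "pass" if ok else "fail"))  # continuous feedback
        if not ok:
            return False, log  # stop the line: change never reaches staging
    return True, log

# Hypothetical stage implementations, for illustration only.
stages = [
    ("integrate", lambda c: c.get("compiles", False)),   # continuous integration
    ("deploy",    lambda c: c.get("deployable", False)), # continuous deployment
    ("test",      lambda c: c.get("tests_pass", False)), # continuous testing
    ("monitor",   lambda c: c.get("healthy", False)),    # continuous monitoring
]

ok, feedback = run_pipeline(
    {"compiles": True, "deployable": True, "tests_pass": True, "healthy": True},
    stages,
)
```

A change that fails an early stage never reaches the later ones, which is what keeps broken code out of staging and production.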
With an automated process for moving application changes through progressively richer test environments that mirror the production environment, the chances of error and rollback are greatly reduced.
The result is increased visibility into the delivery pipeline, standardized communication between Dev and Ops and more efficient and accurate delivery of software projects. And the delivery process can scale dynamically as business needs grow.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. SmartCloud Continuous Delivery is also available on Jazz.net.
cynthyap | Tags: image_management, cloud, provisioning, usage, monitoring, virtualization, cloud_computing, cloud-computing, orchestration | 2 comments | 28,870 views
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
The need to orchestrate really becomes clear when various aspects of cloud management are brought together. The value to the organization at this stage of cloud is simplifying the management of automation – otherwise a balancing act to manage multiple hypervisors, resource usage, availability, scalability, performance and more -- based on business needs from the cloud, with the ultimate goal of delivering services faster.
With orchestration, the pieces are woven together and can be managed more effectively to ensure smooth and rapid service delivery -- and delivered in a user-friendly catalog of services easily accessible through a single pane of glass. In essence, cloud orchestration = automation + integration + best practices.
Without cloud orchestration, it’s difficult to realize the full benefits of cloud computing. The stitching together of best practices and automated tasks and processes becomes essential to optimize a wide spectrum of workload types.
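The formula "orchestration = automation + integration + best practices" can be sketched in a few lines: the individual automated tasks already exist, and the orchestrator's job is to integrate them into a single ordered workflow. Everything below is illustrative; the task names and ordering are assumptions, not a real product API:

```python
# Sketch: orchestration layered on top of existing automation.
# Each automated task manages one silo; the orchestrator integrates
# them into one ordered workflow (a stand-in for "best practices").

def provision_storage(svc):  svc["storage"] = "allocated"; return svc
def configure_network(svc):  svc["network"] = "configured"; return svc
def deploy_workload(svc):    svc["workload"] = "running"; return svc

# The best practice here is encoded as ordering: storage and network
# must be ready before the workload is deployed.
WORKFLOW = [provision_storage, configure_network, deploy_workload]

def orchestrate(service_request):
    """Run every automated task, in order, against one service request."""
    for task in WORKFLOW:
        service_request = task(service_request)
    return service_request

service = orchestrate({"name": "web-tier"})
```

The point of the sketch is that none of the individual tasks changed; the value comes from integrating them behind a single entry point, which is what a service catalog ultimately exposes.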
cynthyap | Tags: virtualization, provider, cloud, csp, cloud-computing, msp, cloud_computing, service, provisioning | 8,355 views
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
In any event, the distinction between managed service providers (MSP) or cloud service providers (CSP), and companies growing in-house capabilities may not be as important as the common need to respond quickly and scale to support customer needs. The challenges facing all of these companies include facilitating the creation of new applications and services while maintaining quality of service, and the need for automation to reduce human resources and error from manual tasks—all with an eye to drive revenue and acquire new customers.
And so, the challenge for service providers of any kind is to increase scalability, automation and uptime while constraining costs. Companies are increasingly solving the critical piece of this puzzle by embracing rapid, high-scale provisioning and key cloud management capabilities to allow them to grow as quickly as their customers’ needs. In particular, the benefits accrue in four key areas.
First, applications can be deployed rapidly across private and public cloud resources.
Second, rich image management tools simplify complex and time-consuming processes for creating virtual images and constraining image sprawl.
Third, operational costs can be lowered by leveraging existing hardware to support an array of virtual servers and diverse hypervisors.
And fourth, high-scale provisioning enables rapid response to changing business needs with near-instant deployment of hundreds of virtual machines.
While the spectrum of virtualization to orchestration functionality helps to manage their environments, high-scale provisioning in particular offers a cost-effective way to leverage capacity as a business commodity—a way for service providers to offer seemingly limitless capacity to their customers while lowering the relative costs of providing it.
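The "seemingly limitless capacity" claim rests on treating provisioning as a parallel batch operation from a shared golden image rather than a one-at-a-time manual task. Below is a minimal sketch of that idea; `clone_vm` and the field names are hypothetical stand-ins, not a real provisioning API:

```python
# Sketch: high-scale provisioning as a parallel batch operation.
from concurrent.futures import ThreadPoolExecutor

def clone_vm(image, vm_id):
    """Stand-in for cloning one VM from a captured golden image."""
    return {"id": vm_id, "image": image, "state": "running"}

def provision_batch(image, count, workers=8):
    """Clone `count` VMs from the same golden image in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: clone_vm(image, i), range(count)))

fleet = provision_batch("golden-web-image", 100)
```

Because each clone is independent, capacity scales with the number of workers and nodes rather than with administrator time, which is what lets a provider sell capacity as a commodity.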
In the case of Dutch Cloud, a CSP based in the Netherlands, a growing client base allowed the company to expand but it was very conscious of the costs and issues related to scalability, performance and security. By adopting a lightweight, high-scale provisioning solution for core service delivery, Dutch Cloud added capacity easily and was able to scale up rapidly without interruption to customer service. The CSP also reduced its administrative workload by 70 percent by adopting automation best practices. Monthly revenue has tripled twice in the last six months without an increase in operational costs.
Other service providers such as SLTN, a systems integrator serving large and mid-sized businesses, have experienced similar cost savings by extending platform managed services to a cloud delivery model. By implementing a low-touch, highly scalable cloud as its core delivery platform across multiple compute and storage nodes, SLTN was able to deploy new services in seconds rather than hours. It was also able to utilize existing commodity skills without significant training, integrate the existing mixed environment and minimize operational administration and maintenance. The underlying IaaS cloud capabilities allowed SLTN to be more efficient and to provide the full spectrum of cloud services to their own customers in a pay-as-you-go model—with better service and at a lower price point.
The benefits that these companies experienced are evidence that high-scale provisioning and cloud management capabilities can dramatically increase service capacity. For service providers of all stripes—whether deliberate or not—these benefits are a critical part of the evolution of cloud services and offer a meaningful way to deliver more value to themselves and their users.
cynthyap | Tags: cloud-computing, computing, cloud, provisioning, automation, virtualization | 5,527 views
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations that are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
- Rapidly scalable deployment designed to meet business growth
- Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
- Reduced complexity through ease of use and improved time to value
- Reduced IT labor resources with self-service requesting and highly automated operations
- Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, or start a single VM and load OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
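As a quick sanity check on those figures: 50,000 VMs across 50 nodes in one hour works out to 1,000 VMs per node per hour, or one VM roughly every 3.6 seconds per node, which is consistent with the quoted under-10-second single-VM start time:

```python
# Back-of-the-envelope check of the scale figures quoted above.
vms, nodes, hours = 50_000, 50, 1

vms_per_node_per_hour = vms / nodes / hours             # 1,000 VMs/node/hour
seconds_per_vm_per_node = 3600 / vms_per_node_per_hour  # one VM every 3.6 s
```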
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
Nimesh Bhatia | Tags: dashboard, sco, cloud-analytics, infohub, cloud-computing, cloud | 10,164 views
IBM made a significant commitment to OpenStack by joining the OpenStack Foundation as a Platinum Member. The IBM SmartCloud Orchestrator v2.2 product has adopted OpenStack to provide enterprises the functionality needed to effectively create and manage their cloud implementations.
The IBM Cloud Labs team is innovating in the area of cloud analytics. A new feature has been created named Information Hub for SmartCloud Orchestrator that adds exciting new reporting dashboards. The new feature will be available as an add-on at ISM Cloud MarketPlace.
The Information Hub dashboard has been designed for cloud users, administrators, planners and decision makers to put information about the cloud infrastructure at their fingertips. It provides usage trend graphs, determines when a critical resource will run out, and aggregates the information for multi-OpenStack environments. Additionally, the information is made available for mobile devices.
These capabilities improve productivity for cloud users and administrators, help cloud capacity planners see the pace of cloud adoption in the enterprise and plan ahead, and let decision makers take the information with them to make informed business decisions about the cloud infrastructure.
cynthyap | Tags: cloud-monitoring, cloud-computing, cloud, provisioning, vmware, virtualization | 4,698 views
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization.
cynthyap | Tags: cloud-computing, virtualization, orchestration, openstack, cloud | 5,040 views
Determine the right cloud orchestration strategy to address the unique needs and pain points of your organization while increasing productivity and spurring innovation. And learn more about the recently announced orchestration capabilities from IBM that leverage OpenStack to manage heterogeneous hybrid environments. Sign up today!
cynthyap | Tags: cloud-computing, orchestration, cloud, openstack, virtualization | 6,197 views
Even if you weren’t at IBM Pulse, chatter about IBM’s announcement that it will leverage open technologies pervasively in the development of its cloud offerings is trending right now on the web.
With IBM SmartCloud Orchestrator—an integrated platform to standardize and manage heterogeneous hybrid environments—IBM is launching its first commercial offering based on OpenStack. And with SmartCloud Orchestrator, IBM is also redefining the scope of orchestration to encompass the streamlining and integration of all resources, workloads and services.
The need for this kind of capability is addressed in the latest IDC report which discusses why it will become a priority as organizations look to improve operational efficiency and reduce the mess and complexity of growing data centers.
The ability to standardize and automate cloud services includes integrating performance and capacity management, usage and accounting, and rich image lifecycle management. In addition, services and tasks such as compute and storage provisioning, configuration of network devices, integration with service request and change management systems and processes can all be streamlined. Out-of-the-box robust workload patterns also enable fast development of cloud services.
With SmartCloud Orchestrator, it’s all brought together to seamlessly manage heterogeneous environments, allowing organizations to build on existing investments and open source technologies.
If you haven’t had time to catch up on what’s trending, here’s the short version on how IBM is helping to advance the cloud to drive innovation.
cynthyap | Tags: virtual-infrastructure, cloud_cost_management, cloud-computing, cloud, virtualization | 4,528 views
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes difficult to assess or manage accurately. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency -- the ability to allocate IT costs, usage and value.
Register today: http://bit.ly/VXXxl3
cynthyap | Tags: virtualization, cloud-computing, provisioning, virtual_image, virtual_image_library, virtual_image_consolidati..., cloud | 6,561 views
It’s been estimated that the number of virtual machines in data centers has increased at least tenfold in the last decade. More than fifty percent of virtualized environments now have more than one brand of hypervisor. The hypervisor promise of cutting infrastructure expense has given way to increases in licensing costs of more than three hundred percent. And the average number of images destroyed? Nobody knows.
In short, the challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity.
A critical piece to solving these challenges, as many organizations have already discovered, is image management. While there are many ad hoc and isolated solutions, there continues to be a real need for comprehensive image lifecycle management to combat image sprawl, get more visibility and analysis into where images are stored and how they are being used, and to ensure security through timely patching of images. This doesn’t necessarily mean jumping to cloud solutions, especially for businesses that aren’t ready to adopt cloud orchestration yet, but rather, implementing image capabilities in the virtualized environment that are robust enough to help with high-value applications and the on-ramp to advanced cloud capabilities.
Because images are easy to create and copy, it’s often difficult to decipher which images are crucial, where there is redundancy and where there may be a need for more governance. It is also an ongoing challenge to understand what an image consists of without launching it. This image complexity has resulted in IT spending a significant portion of their time on mundane or repetitive tasks such as manually building images and maintaining an image library.
Inserting automation best practices into the process of creating, deploying and managing images can result in immediate time and labor savings, with as much as 40-80% labor cost reduction by increasing image/admin ratio efficiency. Automation also helps to optimize the efficiency and accuracy of service delivery in the data center.
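The 40–80% figure follows directly from raising the image-to-admin ratio. The ratios below are hypothetical, chosen only to show the arithmetic behind the claim:

```python
# Illustrative arithmetic only: how raising the image-to-admin ratio
# translates into per-image labor cost reduction. Ratios are hypothetical.
def labor_reduction(manual_ratio, automated_ratio):
    """Fraction of labor cost saved per image when each admin handles more images."""
    manual_cost = 1 / manual_ratio        # admin effort per image, manual process
    automated_cost = 1 / automated_ratio  # admin effort per image, automated process
    return 1 - automated_cost / manual_cost

# e.g. 50 images per admin manually vs 250 with automation -> 80% reduction
savings = labor_reduction(50, 250)
```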
Once images are captured they can be deployed as often as needed. Paired with robust, automated, high-scale provisioning, hundreds of new virtual machines can be deployed in minutes, increasing IT efficiency. They can also be customized based on user needs.
Key to effective image analysis (including image search, drift, version control and image vulnerability) is the use of a federated image library, which pulls together the storage and meta information of images across multiple image repositories and hypervisors.
Image search: With a large amount of image information to contain and understand, it can become difficult to determine the connection between images or their origin. A family-tree hierarchy and grouping of images with version chains simplifies image search by showing how images are linked, when they are in use and where they originated, even in a mixed hypervisor environment. Additionally, searching capabilities within images drastically reduces the complexity of finding the right image and associated information about it.
Image drift: Varying image iterations make it difficult to manage compliance and version control. Frequently, administrators are forced to maintain volumes of duplicate and unnecessary images because it is difficult to ascertain the need, use or ownership of images. Advanced image management can increase visibility into what is inside a virtual machine through a centralized image library, to determine opportunities to consolidate images, or determine if there are security threats from vulnerable images.
With the explosion of images to govern, there is a need to be able to detect vulnerability exposures in images to ensure that no virtual machines are created without the proper level of security patches. All systems, both physical and virtual, need to be patched whether they are distributed or part of the cloud. A simplified, automated patching process can administer virtual images from a single console so you have the scalability to patch as quickly as you can provision, allowing users to maintain golden and copied images in a patched state. With this patching capability, policy enforcement can be accomplished and proven in minutes instead of days, and IT can increase the accuracy and speed of patching enforcement, achieving as much as 98% first pass patch success rate in hours.
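The three capabilities above (search via version chains, drift visibility, and vulnerability detection) all hinge on the federated image library holding metadata from every repository in one index. Here is a minimal sketch of that data structure; the field names, repositories and patch-level check are all illustrative assumptions:

```python
# Sketch of a federated image library: metadata from multiple repositories
# in one index, with parent links forming the "family tree" version chains.
library = [
    {"id": "base-1.0", "parent": None,       "repo": "repoA", "patch_level": 3},
    {"id": "web-1.0",  "parent": "base-1.0", "repo": "repoA", "patch_level": 3},
    {"id": "web-1.1",  "parent": "web-1.0",  "repo": "repoB", "patch_level": 5},
]
index = {img["id"]: img for img in library}

def lineage(image_id):
    """Walk an image's version chain back to its root, across repositories."""
    chain = []
    while image_id is not None:
        chain.append(image_id)
        image_id = index[image_id]["parent"]
    return chain

def vulnerable(required_patch_level):
    """Flag images below the required patch level, wherever they are stored."""
    return [img["id"] for img in library
            if img["patch_level"] < required_patch_level]
```

Because the index spans repositories and hypervisors, the same walk answers "where did this image come from?" and the same scan answers "which images still need patching?" without launching anything.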
The benefits of a comprehensive, integrated image management solution are immediately obvious. Best of all, there is a high degree of reward with very little risk.
And with image sprawl under control, organizations can expand capabilities for richer end-to-end service management across the virtualized infrastructure such as performance management and data protection as well as look to higher value cloud capabilities for faster service delivery.
For more information, here’s an in-depth look at image management as well as trial code to test out image lifecycle management capabilities.
cynthyap | Tags: patch, provisioning, cloud, cloud_computing, cloud-computing, security | 4,813 views
Hosted by developerWorks: 27 September 2012, 10:30 a.m.
Most organizations are embracing virtualization and some level of cloud computing. They are facing the inevitable problems associated with security and VM sprawl, which result from the rapid proliferation of virtual machines. Two of the most critical problems to address in virtual machine management are patching all of these new virtual systems and ensuring that they are compliant with corporate and security standards.
Join this session to learn how the combination of IBM SmartCloud Provisioning and IBM Endpoint Manager can help you to better manage both virtual and physical environments, and improve endpoint security while driving to higher levels of automation. Register now!
Mike Gare is a Product Manager with IBM with almost 20 years of experience in a variety of information technology areas, including telecommunications, mobile technologies, data center automation and cloud computing. Mike has performed a variety of leadership roles, many of which involved working directly with customers to help them understand, compare and choose the best business solution for their needs. He has worked in software product management throughout IBM and is currently responsible for Tivoli Provisioning Manager as well as some newer cloud and server automation solutions.
Noah Salzman is a Product Manager for the IBM Endpoint Manager product line, formerly known as BigFix. Noah has been part of the Endpoint Manager team for the last five years and has seventeen years of industry experience working with enterprise software, spanning systems management, data encryption and network security technologies. Noah's work at BigFix and IBM has focused on security architecture, government certification programs, and the security and compliance product line. Previously, Noah worked on software development teams at Apple, PGP and nCircle.
cynthyap | Tags: devops, agile, cloud, development, cloud_computing, cloud-computing, provisioning | 4,656 views
Get involved with the new IBM DevOps project and beta on Jazz.net-- IBM SmartCloud Continuous Delivery is an agile, scalable and flexible solution for end-to-end lifecycle management and automation, creating an environment that takes collaboration between Development and Operation teams to the next level. Learn more.
cynthyap | Tags: security, management, provisioning, virtualization, patch, cloud-computing, cloud | 4 comments | 9,297 views
We know that cloud computing offers a myriad of benefits like rapid service delivery and lower operating costs. But it can also lead to challenges in data governance, access control, activity monitoring and visibility of dynamic resources—in essence, all aspects of IT security.
The IT organization must have the capabilities both to deliver services more quickly to meet the demands of the business and to provide high levels of security and compliance. In the past, the delivery of services was typically the bottleneck in providing new services, but now, with automated cloud and self-service delivery models, the teams responsible for change management and security can quickly become the bottleneck due to manual processes and siloed tools.
For example, organizations need the ability to patch all of their systems, both physical and virtual, whether distributed or part of a cloud. Operations teams need better insight into and control of deployed virtual systems, including OS patch levels, installed middleware applications and related security configurations. And there can be too many security exposures with offline and suspended VMs that haven’t been patched in weeks or months.
A holistic approach is needed that addresses rapid provisioning of services and automation of key security and compliance requirements. Together these capabilities can keep you in control of rapidly changing cloud environments. First let’s look at the capabilities needed in a cloud provisioning solution.
Cloud provisioning should combine application and image provisioning for workload optimized clouds and deliver:
· Reduced costs with automated high-scale provisioning; multiple hypervisor options and HW of choice
· Accelerated time-to-market with standardized pattern-based deployment for workload optimized cloud
· Image sprawl prevention with built-in advanced image lifecycle management capabilities
· Ease of adoption and clear roadmap to move to advanced cloud capabilities
Second, a unified endpoint management approach is required to provide visibility and control of your systems, regardless of context, location or connectivity, and needs to deliver:
· Heterogeneous platform support with seamless patch management for multiple operating systems, including Microsoft Windows, Unix, Linux and Mac OS, as well as hypervisor platforms
· Automatic assessment and “single click” remediation, which shortens time to compliance by automatically identifying necessary patches and enabling users to target and remediate endpoints quickly
· Enterprise-class scalability and security to provide proven scalability, including fine-grained authorization and access control capabilities
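The "automatic assessment and single-click remediation" item above boils down to comparing each endpoint's installed patches against a baseline and then pushing only the gaps. The sketch below shows the idea; the baseline, patch IDs and function names are illustrative, not the Endpoint Manager API:

```python
# Sketch of automatic assessment and one-step remediation: compare each
# endpoint's installed patches against a baseline, then fix only the gaps.
BASELINE = {"KB-101", "KB-102", "KB-103"}  # hypothetical required patches

def assess(endpoints):
    """Return {endpoint: missing patches} for every non-compliant endpoint."""
    return {name: BASELINE - installed
            for name, installed in endpoints.items()
            if BASELINE - installed}

def remediate(endpoints):
    """Apply every missing patch; return the endpoints, now compliant."""
    for name, missing in assess(endpoints).items():
        endpoints[name] |= missing  # stand-in for actually pushing patches
    return endpoints

fleet = {"vm-1": {"KB-101", "KB-102", "KB-103"}, "vm-2": {"KB-101"}}
report = assess(fleet)  # only vm-2 is out of compliance
```

Running the assessment again after remediation should report no gaps, which is the sense in which compliance can be proven in minutes rather than days.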
Explore these capabilities with the new IBM SmartCloud Patch Management.