Cloud Computing Central
FleetCor Selects Brocade to Provide Cloud-Optimized Network Services for 500,000 Commercial Accounts
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 -- Brocade (NASDAQ: BRCD) today announced that FleetCor, a leading independent global provider of specialized payment products and services to businesses, commercial fleets, major oil companies, petroleum marketers and government fleets, has selected Brocade as the vendor to build its cloud-optimized network. This new network enhances FleetCor's ability to securely process millions of transactions monthly and ultimately better serve its commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor cardholders worldwide, and they are used to purchase billions of gallons of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help evolve its data center and IT operations into a more agile private cloud infrastructure. Brocade® cloud-optimized networks are designed to reduce network complexity while increasing performance and reliability. Brocade solutions for private cloud networking are purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we looked at market leadership and non-stop access to critical data," said Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade cloud-optimized networking solutions are perfect for our data centers because they allow us to optimize applications faster, virtually eliminate downtime and help us meet service level agreements for our customers. Moving to a cloud-based model also provides us the flexibility to make adjustments on the fly and access secure information virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router in each of its three data centers, citing scalability as a major driver for the purchase. This approach enables FleetCor to virtualize its geographically distributed data centers and make full use of the equipment it already has, achieving maximum return on investment. The Brocade MLXe provides additional benefits for FleetCor by using less power and occupying a smaller footprint than competing routers, which is critical for growth in power- and space-constrained locations. The Brocade MLXe also enables continuous business operation for FleetCor through Multi-Chassis Trunking; massive scalability, supporting the industry's highest 100 GbE density with no performance degradation for advanced features such as IPv6; and flexible chassis options to meet network and business requirements.
The Brocade ServerIron ADX Series of high-performance application delivery switches provides FleetCor with a broad range of application optimization functions to help ensure the reliable delivery of critical applications. Purpose-built for large-scale, low-latency environments, these switches accelerate application performance, load-balance high volumes of data and improve application availability while making the most efficient use of the company's existing infrastructure. The series also delivers dynamic application provisioning and de-provisioning for FleetCor's highly virtualized data center and enables seamless migration and translation to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers, FleetCor has eliminated thousands of costly networking cables, saving hundreds of thousands of dollars and allowing the company to segment, streamline and secure its network. FleetCor has also been able to easily integrate Brocade network technology with third-party offerings already installed in the network, for complete investment protection. FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in each of our data centers to help us leverage the benefits of cloud computing and the Brocade MLXe delivered on all fronts," said Keirbeck. "By virtualizing our data center, Brocade allows for non-stop access to the mission-critical data that FleetCor and its customers rely on every day. We chose the Brocade MLXe because of the tremendous results we already saw from our existing Brocade solutions and the exceptional support and service."
According to a report from analyst firm Gartner, "Although 'economic affordability' is an immediate, attractive benefit, the biggest advantages (of cloud services) result from characteristics such as built-in elasticity and scalability, reduced barriers to entry, flexibility in service provisioning and agility in contracting."(1)
(1) Gartner, "Cloud-Computing Service Trends: Business Value Opportunities and Management Challenges, Part 1," February 23, 2010.
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.
AndyGroth
Note: This is a (slightly updated) re-post from a personal blog - just my view in the context of IBM's drive to foster open choice and collaboration.
Please bear in mind that this is based on my personal thoughts (not an official IBM position) and read the article as it is intended to be - thought provoking ... enjoy!
Having returned from the European Red Hat Partner summit and the VMware vForum where I presented on behalf of IBM, it took me a while to digest the “openness” of it all …so let me share my thoughts retrospectively.
Being proprietary rocks…! (?)
Public cloud can only exist on open source … ?
There was a bold statement by a speaker at the Red Hat summit: “Public cloud can only live on open source!”
In the meantime, keep your eyes peeled and expect the industry to increase its focus on enabling hybrid connectors - I obviously can't make any specific forward-looking statements from an IBM perspective. But just take Red Hat as an example: it made clear that CloudForms (their IaaS platform) can indeed manage VMware through their DeltaCloud driver and - while currently positioning CloudForms for private and hybrid - their vision (of course) is for DeltaCloud to be the top-level public layer linking into private (or public) VMware clouds.
Red Hat recently announced their hosted "OpenShift" PaaS platform, which essentially allows developing and running Java, Ruby, PHP and Python applications and comes in three editions: 1) "Express" (free), which provides a runtime environment for simple Ruby, PHP and Python apps; 2) "Flex", for multi-tiered Java and PHP apps with more options (like MySQL databases and JBoss middleware); and 3) "Power", which offers full control, supports "any application or programming language that can compile on RHEL 4, 5, or 6" and enables deploying apps directly to EC2 and (in the near future) to IBM's SmartCloud.
VMware had earlier announced its own open "Cloud Foundry" PaaS project, which has incarnations as a fully hosted service (currently in beta), an open source project (CloudFoundry.org) and a free single PaaS instance for local development use.
So what's IBM doing in this space? IBM has recently announced the IBM Workload Deployer - an evolution of the WebSphere CloudBurst hardware appliance. It essentially stores and secures "WebSphere Application Server Hypervisor Edition images" and more importantly workload patterns which can be published into a cloud. These workload patterns (think of them as customizable templates that capture settings, dependencies and configuration required to deploy applications) enable you to focus on what essentially differentiates PaaS from IaaS ... the application rather than the infrastructure. Dustin Amrhein explains this much better than me in this little blog.
So, yes, I honestly believe that KVM has a good chance to become the hypervisor of choice for public cloud. However … that is unlikely to be the control point. So which management platform(s) will take that all-important crown? Will it be an OSS-based one? I don't want to hazard a guess; there are many … and that is part of the problem. Many argue that the open source "communities" will have to overcome a challenge and become a COMMUNITY if they want to succeed. ESX could not be beaten with 7 or 8 different (but weak) flavours of Xen, and that was just a single OSS project splintered by commercial offerings … in the same way, the sea of OSS-based cloud controllers (Eucalyptus, OpenStack, CloudStack, DeltaCloud, OpenNebula) faces focussed (more proprietary) heavy-weights like Microsoft, Google and Amazon.
The increasing number of OSS management solutions and "open bodies" will also make the likes of VMware less nervous than intended, as long as those solutions indirectly compete with each other …
BUT (and it's a big "but") I would argue that anyone not strategically looking at these open solutions is at best ignorant or - e.g. if you are a service provider yourself - more likely long-term professionally suicidal. Yes, in an ideal world everyone wants 'today's best of breed', but more critically you have to maintain your negotiating position through the ability to switch, and if only for that reason you need to keep your options open!
It will be of the utmost importance to partner with solution providers who share this mind-set and have the capability and strategy to support such a long-term goal and yes, IBM is clearly uniquely positioned to fulfill this role.
And while I spoke to many completely different clients at both events, that was a common concern raised by most of them.
Industry endorsement like the recent OVA announcement - with IBM being a major driving force and supporter - will help to give KVM the needed credibility and weight … I am looking forward to seeing these visions translated into tangible solutions.
- Test Drive the IBM SmartCloud with this simulator...
- CloudForms (IaaS) is in beta with availability planned for fall 2011
JeffHebert
Great video. A great many folks have already started the journey into the clouds without being fully aware of it. Consider that most large enterprise data centers are consolidating and virtualizing servers, storage and networking today; when all three of those areas are consolidated and virtualized, you are transforming business processes and will eventually reach a point where infrastructure/information on demand is the next logical step.
Cloud Service Provider Platform (CSP2)
Till now we have seen, through the earlier posts, the essentials of creating a cloud environment, which consists of the management platform as well as the managed environment. We have seen the critical roles and organizations involved, as well as the importance of Cloud Service Strategy and Cloud Service Design. We also saw the criticality of a Cloud Computing Reference Architecture (CCRA) to tie all the solution elements together, and how IBM Service Delivery Manager (ISDM), an enterprise cloud solution based on Tivoli Service Automation Manager (TSAM), can be deployed as a set of virtual images that automate IT service deployment and provide resource monitoring, cost management, and provisioning of services in the cloud.
The IBM Cloud Service Provider Platform is specifically tailored to the needs of CSPs and is designed to help them successfully:
Figure 1 IBM Integrated Service Management Solution for Cloud Service Providers
The IBM Cloud Service Provider Platform is an integrated service management solution for cloud service providers, built around a core Service Automation and Management component provided by ISDM. Beyond the core, IBM's Integrated Service Management for Cloud Service Providers makes available four extensions (network management, security management, storage management, and advanced monitoring and service level management) that enable a comprehensive management offering.
Communications service providers (CSPs) around the world are looking for smarter ways of doing business. They are being challenged to transform the way services are created, managed, and delivered. CSP2 neatly integrates and extends the SPDE (Service Provider Delivery Environment) for communications service providers to build the ecosystem to become a cloud service provider. For a cloud-based business strategy, check out the video from Scott on the value of CSP2 for CSPs.
In this article learn how to:
Set up a 64-bit Linux instance (a Bronze-level offering) with the Linux Logical Volume Manager (LVM).
Capture a private image and provision it as a new Platinum instance.
Grow the LVM volume and file system to accommodate the new physical volumes.
Configure LVM across physical volumes using Linux LVM-type partitions.
Background on LVM and the test scenario
First, a description of LVM concepts and the test scenario for those who may not be familiar with LVM.
Note: You are about to configure Linux LVM: Here be Dragons. Mind the gap.
The Linux LVM is organized into physical volumes (PVs), volume groups (VGs), and logical volumes (LVs).
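The steps outlined above can be sketched as a command sequence. This is a hypothetical sketch only: the device names (/dev/xvdf, /dev/xvdg, /dev/xvdh), the volume group name data_vg, the logical volume name data_lv, and the mount point /data are placeholders for whatever your instance actually exposes, and every command requires root on a real machine.

```shell
# Initialize two attached disks as LVM physical volumes (device names are placeholders)
pvcreate /dev/xvdf /dev/xvdg

# Group them into a volume group, then carve one logical volume from all free space
vgcreate data_vg /dev/xvdf /dev/xvdg
lvcreate -n data_lv -l 100%FREE data_vg

# Make a filesystem and mount it
mkfs.ext4 /dev/data_vg/data_lv
mkdir -p /data
mount /dev/data_vg/data_lv /data

# Later, to grow onto a newly attached physical volume:
pvcreate /dev/xvdh
vgextend data_vg /dev/xvdh
lvextend -l +100%FREE /dev/data_vg/data_lv
resize2fs /dev/data_vg/data_lv    # grow the ext4 filesystem to match (online)
```

The same grow sequence is what lets a captured image provisioned at a larger instance size absorb its additional volumes without reformatting.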
Sreek Iyer
Capacity Planning for the Management Platform
Sizing the management platform means sizing the following components, which provide its functional capabilities.
Further, the sizing will be affected by the non-functional requirements that each of these components of the management platform must address. One should review the performance reports and workload-handling capabilities of each of the selected products to validate that the proposed sizing can meet the non-functional requirements of the solution.
The size of the management platform depends on the size of the managed environment. It is preferable to keep a centralized management environment and scale it as the managed environment grows. This is often not an easy calculation or a simple process; one needs to apply sound engineering to plan the capacity for each capability. Apart from the capabilities discussed above, the following key areas also need to be covered
Tivoli Service Automation Manager Version 7: Capacity Planning Cookbook is an excellent document covering the various aspects in detail, as well as providing some samples.
This book also gives links to some other whitepapers that provide interesting further reading on the subject.
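As a toy illustration of the capacity arithmetic discussed above, a first-cut estimate might divide the managed-environment size by an assumed per-node capacity. All of the figures below are invented for the example; they are not taken from the cookbook or any IBM sizing guide.

```shell
#!/bin/sh
# Hypothetical first-cut sizing: every number here is illustrative only.
MANAGED_VMS=2000               # size of the managed environment
VMS_PER_AUTOMATION_NODE=500    # assumed capacity of one service-automation node
VMS_PER_MONITORING_NODE=250    # assumed capacity of one monitoring node

# Integer ceiling division: (a + b - 1) / b
AUTOMATION_NODES=$(( (MANAGED_VMS + VMS_PER_AUTOMATION_NODE - 1) / VMS_PER_AUTOMATION_NODE ))
MONITORING_NODES=$(( (MANAGED_VMS + VMS_PER_MONITORING_NODE - 1) / VMS_PER_MONITORING_NODE ))

echo "service automation nodes: $AUTOMATION_NODES"
echo "monitoring nodes: $MONITORING_NODES"
```

A real plan would replace these assumed per-node capacities with the measured workload-handling figures from the product performance reports mentioned above.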
In Collaboration With Ixia, Brocade Will Demonstrate the Performance, Reliability and Advanced Feature-Set of the Industry's First 100 GbE Terabit-Trunk Router
LAS VEGAS, NV -- (MARKET WIRE) -- 05/09/11 -- INTEROP 2011 -- Brocade (NASDAQ: BRCD) today announced that it will work with Ixia (NASDAQ: XXIA) to replicate mission-critical service provider environments and test high-capacity Brocade® Ethernet network solutions designed to help service providers become cloud-optimized. The demonstration creates a true-to-life service provider infrastructure scenario for increasing IPv4/IPv6 routing scalability within the core Multiprotocol Label Switching (MPLS) network while retaining high service levels for end customers. The demonstration will be held in the Brocade booth (# 833) during Interop Las Vegas 2011, at the Mandalay Bay Convention Center.
As service providers evolve to become destinations offering cloud-based services, rather than just basic data delivery, the performance and scalability demands on their networks have increased significantly. The Brocade MLXe Core Router is a 100 Gigabit Ethernet (GbE)-ready solution that enables service providers and virtualized data centers to support these demands by efficiently delivering cloud-based services that use less infrastructure and help reduce expenditures.
In this specific demonstration, Brocade and Ixia will test the IPv4/IPv6 traffic flows, MPLS and throughput capabilities of the Brocade MLXe multiservice router over 10 and 100 GbE connections. By leveraging Ixia's leading test solutions, attendees will be able to view the following:
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
© 2011 Brocade Communications Systems, Inc. All Rights Reserved.
Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
By Maureen O'Gara
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top-executive exodus it has been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay off 550 people, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's going to a "streamlined operating model" focused on five areas -- not, apparently, the literally 30 different directions it's been going in -- although it did say, come to think of it, something about "greater focus," so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services; collaboration; data center virtualization and cloud; video; and architectures for business transformation."
Nobody seems to know what that last one is; the Wall Street Journal criticized Cisco for not being able to explain in plain English what it's doing, and Barron's complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days or by July 31 when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on making a series of changes throughout the next quarter and as we enter the new fiscal year that will make it easier to work for and with Cisco, as we focus our portfolio, simplify operations and manage expenses. Our five company priorities are for a reason - they are the five drivers of the future of the network, and they define what our customers know Cisco is uniquely able to provide for their business success. The new operating model will enable Cisco to execute on the significant market opportunities of the network and empower our sales, service and engineering organizations."
IBM Systems Storage
Brocade OEM Partners Provide Support for Fibre Channel Fabric Innovation to Enable Cloud-Optimized Networks
Optimization of SAP infrastructure to deliver better performance, lower costs and higher energy efficiency
EHNINGEN, Germany - 20 Apr 2011: Today IBM (NYSE: IBM) announced that Audi has selected IBM to build a cloud environment for Audi's SAP infrastructure, designed to deliver higher performance, fast and flexible provisioning of SAP applications and capacity, lower infrastructure costs, and above-average energy efficiency, with the ability to scale future SAP applications almost without limit.
Audi was facing challenges in scaling its IT systems due to the increased use of business-critical applications in areas such as production and logistics, supplier relationship management and human resources, which strained its IT infrastructure in terms of reliability and flexibility.
In April 2010, Audi signed a contract with IBM to rebuild its existing SAP infrastructure, including consolidation and virtualization of the server hardware, process standardization, opportunities for performance-related billing and much higher operational flexibility. Audi's new SAP infrastructure solution is based on a new generation of high-performance IBM POWER7 servers and IBM database technology (DB2).
"Along with a very high level of reliability and failure safety, the new SAP infrastructure solution, which we will migrate into a private cloud, will substantially lower energy consumption," said Audi's Lorenz Schoberl, head of IT Infrastructure Services. "The DB2 solution's built-in data compression capability will enable us to save time and reduce the costs of storage and archiving."
"We were able to demonstrate that our combination of POWER servers and DB2 will decrease the total cost of ownership over the next four years -- from a business and technology point of view," said Gunter Frohlich, IBM Client Manager for Audi.
The new infrastructure is fully operational and will be managed by IBM in a private cloud environment hosted in Audi's data center.
About IBM Cloud Computing
IBM has helped thousands of clients adopt cloud models and manages millions of cloud based transactions every day. IBM assists clients in areas as diverse as banking, communications, healthcare and government to build their own clouds or securely tap into IBM cloud-based business and infrastructure services. IBM is unique in bringing together key cloud technologies, deep process knowledge, a broad portfolio of cloud solutions, and a network of global delivery centers. For more information about IBM cloud solutions, visit www.ibm.com/smartcloud
For more about IBM, visit www.ibm.com/de/pressroom.
Brocade Leads OpenFlow Adoption to Accelerate Network Virtualization and Cloud Application Development
SAN JOSE, CA -- (MARKET WIRE) --
SDN involves several components, one of the most important being standards-based OpenFlow, an emerging standard that gives service providers granular control of their network infrastructures. Brocade will leverage its work in developing OpenFlow across its high-performance service provider portfolio to enable customers to build high-value applications across their networks with greater efficiency and unparalleled simplicity.
Today's service providers and network operators face a number of challenges that require multiple solutions in order to ensure highly efficient and profitable operation. Brocade's goal in working with the
Brocade has developed an OpenFlow enabled IP/MPLS router as part of its service provider product portfolio for application verification and interoperability testing with its partners and customers. Brocade plans to make additional OpenFlow strategy and product announcements later this year. Brocade will initially focus its efforts on delivering solutions that enable the scalability and manageability required in hyper-scale cloud infrastructures.
"Stronger definition of network behavior in software is a growing trend, and open interfaces are going to lead to faster innovation," said
Brocade CTO Named to TechAmerica CLOUD(2) Commission
Commission to Provide Recommendations on Deployment of Cloud Technologies to the United States Federal Government
The commission's mandate is to deliver recommendations to the U.S. government on ways it can effectively deploy cloud technologies and set specific public policies that will help drive further cloud innovation in both the private and public sectors.
Brocade has direct and highly relevant experience in the challenges and opportunities that the CLOUD(2) Commission is addressing, by virtue of its 15 years of experience building mission-critical data center networks for some of the most demanding IT environments in the world. This experience and expertise has positioned Brocade to address the challenges of moving to more agile, flexible cloud IT models.
The Brocade approach, as defined by its Brocade One™ strategy, is to help its customers migrate smoothly from current networking architectures to a world where information and applications reside and can be accessed anywhere through open, multivendor cloud technologies.
"Brocade is an established leader in building and deploying fabric-based data center architectures, and customers continue to trust their networks to Brocade as they move to highly virtualized and cloud models," said
The commission will make recommendations for how government should deploy cloud technologies and address policies that might hinder U.S. leadership of the cloud in the commercial space. Recommendations for government deployment will be presented to Federal Chief Information Officer
The commission is composed of 71 experts in the field, from both the business and academic worlds. Leading the CLOUD(2) commission are co-commissioners
Also joining co-chairmen Benioff and Capellas representing academia will be
A full list of commissioners is available at http://www.techamericafoundation.org/cloud-commission-commissioners
To learn more about CLOUD(2), please visit http://www.techamericafoundation.org/cloud-commission
Sreek Iyer
Chapter 14 - Management Platform & Managed Environments
To design a good cloud management platform, we need to understand the managed environment. As we know, the workloads include not only those running on virtual infrastructure but also those on traditional infrastructure. So we need to design a management platform that can support the delivery of traditional services as well as cloud services.
The advantage of using the IBM reference architecture (refer to the previous chapter) is that we keep the service management cost to a minimum and are able to manage multiple services (IaaS, PaaS, SaaS, traditional services) through a single management platform (the Common Cloud Management Platform).
The design of the management platform is mainly driven by what platforms we need to manage as well as the services we have to deliver. The core components of the management platform are determined by the amount of service automation expected to be provided by the platform.
The cloud management platform can be thought of like a Service Delivery Platform as applied in the telecommunications industry. The term Service Delivery Platform (SDP) usually refers to a set of components that provides a service delivery architecture (such as service creation, session control & protocols) supporting multiple service delivery models.
The core components can be again classified into the business support (BSS) components and the operational support (OSS) components. The business components include ways to manage the customer, subscription, offering & catalog, contract, order, billing, and financial aspects of the platform. The OSS deals with the backend aspects of fulfilling the service request. So it includes components like service automation, provisioning, monitoring and management.
The IBM Tivoli suite of products addresses almost all of the OSS requirements as well as some of the key components on the BSS side. As an architect, the key decision is to look at the capabilities required based on the client's needs and create a platform that is extensible. This needs to be done with flexibility in mind, meaning you have the ability to add and remove components to support different capabilities. In an established and mature data center, it is highly unlikely that all these components will be delivered by a single vendor. That's why an architecture built on open standards is critical to the success of building a good management platform.
IBM is leading the efforts for the adoption of standards by different cloud providers, consumers and tools vendors. The work being done by IBM with the Open Group and the Cloud Standards Customer Council are examples of this.
Once we have determined the functional components of our solution we need to worry about the non-functional requirements. These include aspects like security, availability, resiliency, performance, scalability, capacity planning and sizing. We will need to determine these aspects for the management platform based on the size and heterogeneity of the managed environment. We will discuss these aspects in the next chapter.
Intel® Cloud Builders Reference Architecture Library
Key challenges and focus areas for IT include enhancing efficiency, security, resource utilization, flexibility, and simplifying data center management, among others. Intel works closely with leading systems and solution providers to deliver proven reference architectures to address IT challenges. This work is based on IT requirements—from a wide range of end users—that address challenges in evolving to cloud and next- generation data centers, including the evolving usage requirements of the Open Data Center Alliance. This lab-based experience is embodied in Intel® Cloud Builders reference architectures. Each reference architecture provides detailed instructions on how to install and configure a particular cloud software solution using Intel® Xeon® processor-based servers.
Developed with ecosystem leaders, the following reference architectures relate to building a cloud, or Infrastructure as a Service (IaaS), and to enhancing and optimizing cloud infrastructure with a focus on security, efficiency, and simplifying your cloud environment.
Learn more about how to build and optimize your cloud infrastructure via reference architecture guides below. Read More>
IBM Joins Forces with Over 45 Organizations to Launch Cloud Standards Customer Council for Open Cloud Computing New user-led group to focus on addressing the challenges and requirements of using an Open Cloud
In our previous posts on the IT industry’s shift to the Cloud Services era, we’ve provided definitions, market context, user adoption trends, and user views about cloud services benefits, challenges and suppliers.
The development of this forecast involved a team of over 30 IDC analysts, led by Robert Mahowald (Business Applications/SaaS), Tim Grieser (Infrastructure Software), Steve Hendrick (Application Development & Deployment Software), Matt Eastwood (Servers) and Rick Villars (Storage), with additional contributions from David Tapper (Outsourcing/Hosted Services) and John Gantz (Global Research).
SAN FRANCISCO, CA - 07 Apr 2011: IBM (NYSE: IBM) today unveiled its next generation IBM SmartCloud, an enterprise-class, secure cloud specifically created to meet the demands of businesses.
To accelerate the shift from experimentation, development and assessment to full scale enterprise deployment of cloud, IBM is building out its existing cloud portfolio with IBM SmartCloud, enterprise cloud technologies and services offerings for private, public and hybrid clouds based on IBM hardware, software, services and best practices.
As part of this announcement, IBM is demonstrating a next-generation, enterprise cloud service delivery platform currently piloting with key clients and available later this year. For the first time, enterprise clients will be able to select key characteristics of a public, private and hybrid cloud to match workload requirements from simple Web infrastructure to complex business processes, along five dimensions, including:
· Security and isolation
· Availability and performance
· Technology platforms
· Management Support and Deployment
· Payment and Billing
The IBM SmartCloud includes a broad spectrum of secure managed services to run diverse workloads across multiple delivery methods, both public and private. It gives customers choice, with the potential for end-to-end management of service delivery from the server and operating system to the application and process layer.
“The new IBM SmartCloud allows for the best of both worlds – the cost savings and scalability of a shared cloud environment plus the security, enterprise capabilities and support services of a private environment,” said Erich Clementi, senior vice president, IBM Global Technology Services. “In thousands of cloud engagements, we have discovered that enterprise clients want a choice of cloud deployment models that meet the requirements of their workloads and the demands of their business.”
This level of choice and control translates into capabilities customized to your needs and priorities, whether you’re deploying a simple web application, an ordering logistics system or a complete ERP system.
The new IBM cloud can enable organizations, their employees and partners, to get what they need, as they need it – from advanced analytics and business applications to IT infrastructure like virtual servers and storage or access to tools for testing software code - all deployed securely across IBM’s global network of cloud data centers.
The IBM SmartCloud has two implementation options: Enterprise and Enterprise +.
- Enterprise – Available today, this option expands on our existing Development and Test Cloud, allowing customers to extend internal development and test efforts, reducing application development tasks from days to minutes via automation and rapid provisioning, with over 30% lower costs versus traditional application environments.
- Enterprise+ – To be made available later this year, Enterprise+ will complement and expand on the value of Enterprise, offering brand-new capabilities that provide a core set of multi-tenant services to manage virtual server, storage, network and security infrastructure components, including managed operational
Cloud computing fundamentals
Summary: A revolution is defined as a change in the way people think and behave that is both dramatic in nature and broad in scope. By that definition, cloud computing is indeed a revolution. Cloud computing is creating a fundamental change in computer architecture, software and tools development, and of course, in the way we store, distribute and consume information. The intent of this article is to aid you in assimilating the reality of the revolution, so you can use it for your own profit and well being. Learn more>
Last year’s acquisition policy pronouncements are starting to be felt across the U.S. Army, with upticks in cloud computing initiatives, increasing use of fixed-price contracts and adoption of social media.
“Army IT spending will remain stable; the goal is to optimize the IT [spending]. Optimization will be guided by computing trends,” said Gary Winkler, Army program executive officer for enterprise information systems.
He was one of several Army acquisition speakers at the AFCEA Belvoir Industry Days conference at the National Harbor in Oxon Hill, Md. Winkler also recently announced he is leaving the Army.
Efforts to improve efficiency, realign spending priorities and streamline a cumbersome acquisition process were launched during the past year amid a tightening national budget by Defense Secretary Robert Gates and Ashton Carter, undersecretary of defense for acquisition, technology and logistics.
Leading the charge for the Army’s efforts to hold down spending and become more efficient are cloud computing initiatives, mobile technologies, data center consolidation and social collaboration, Winkler said.
Winkler said that mobile data traffic is on track to increase 39-fold between 2009 and 2014, and that the social software market is growing 40 percent per year through 2013, trends that are pushing the Pentagon’s policies further down into day-to-day operations.
The Army also wants to increase its use of firm fixed-price and multiple-source contracts, as directed in Carter’s Better Buying Power initiative, and is looking to maximize broadly scoped contracts that can be used for a variety of missions.
However, there are still plenty of challenges, and there likely will be more to come. Winkler predicted that force reductions could still lie ahead for DOD, citing his own experience in the 1980s when, like now, an insourcing effort was followed by a hiring freeze — which was later followed by layoffs.
“We can tighten our belts and squeeze a little bit [as directed by the Pentagon] — but I think it’s going to be more than just a little bit,” Winkler said.
Still, PEO-EIS has been involved in the development of Better Buying Power tenets, including helping shape concepts and strategies for improving tradecraft services, establishing common taxonomy and reforming IT acquisition — all banner items in Carter’s 23-point acquisition reform plan released last September. Read More>
"Provision public cloud resources or securely extend your internal virtualized infrastructure into the public cloud with VMware and our vCloud Powered service providers, the largest ecosystem of cloud computing partners. Leverage secure hybrid cloud resources with confidence while providing choice and flexibility, ensuring interoperability and portability of workloads between cloud environments with a VMware vCloud infrastructure built on VMware vSphere, VMware vCenter, VMware vCloud Director, and VMware vShield."
"Security often comes up as a big stopping point for cloud computing. One of the ways around this is to build a private cloud – one that remains within the corporate firewall and wholly controlled internally.
That was the approach taken by Los Alamos National Laboratory as it seeks to create an infrastructure on demand (IOD) architecture to simplify the rollout of new technology projects and to eliminate delays in storage, server and network provisioning.
Anil Karmel, IT manager at Los Alamos National Lab, noted four tenets that played a major role in the private cloud decision:
• green IT
• streamlined operations
• rapid scaleup/down
“As we deploy more virtual servers, we consume far less power and also reduce electronic waste,” said Karmel. “We estimate eventual savings of $1.3 million annually due to IOD.”
Server capacity on demand is now achievable in a few clicks. Instead of 30 days to provision a server, it now takes less than 30 minutes.
The organization is utilizing HP c7000 blade enclosures along with HP Virtual Connect Fibre Channel/Flex 10 Ethernet. HP BL460c and BL490c blades are used, with each blade containing multiple quad-core and six-core chips.
A NetApp SAN was brought in to add storage capacity. This is based on the NetApp V Series with 2 PB of Tier 2 SATA storage. Tier 1 is provided by existing HP arrays.
The cloud itself consists of four elements: a web portal at the front end; Microsoft SharePoint as the automation engine for cloud workflows, and also as the integration point for functions such as chargeback; VMware vCloud Director to manage and operate the cloud; and VMware vShield to provide security at both the application level and at the user device level.
“Any virtual environment has to be cost effective, so that means it has to be simple while being aware of any and all changes in real time,” said Karmel.
This is especially important in the security arena. Traditional security operates at the hardware or software layer. But the addition of a virtualization layer, said Karmel, provides too many gray areas for such security tools to operate effectively. Hence security itself is now being virtualized to eliminate yet another wave of security holes showing up in the corporate networks.
Using Infrastructure on Demand, the National Lab is creating virtual security enclaves using vShield that prevent one desktop or client from infecting others, and keeps virtual machines (VMs) out of harm’s way. Rules are set indicating access rights, as well as security protocols based on threat detection. Traditional security tools interface with this virtual security layer to keep servers and devices more protected. Any time a threat is detected, the offending virtual computer is sent to a remediation area, which has no network connectivity with which to propagate malware.
“This all occurs automatically based on preset policy,” said Karmel. “If a VM is moved from one host to another, the security policy given to it moves with it.”
To prevent VM sprawl, VMs are given an expiry date. This is one year by default, though that can be adjusted. Thirty days before the due date, an email is automatically generated asking the VM owner about renewal.
A similar email is sent with 10 days left, and again the day before expiry. As soon as the VM is turned off, the owner is informed and asked whether to bring it back online. Even then, 29 days later, the user is told the VM is scheduled for deletion. The next day it is deleted.
However, a backup is retained for seven years just in case. The NetApp storage is used to create snapshots of VMs before they are retired to tape. For now, restores are not automated. But in the next version of Infrastructure on Demand, users will be able to restore VMs they desire in a few clicks. “Lifecycle management of VMs is very important,” said Karmel.
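The expiry-and-reminder policy described above can be sketched in a few lines. This is an illustrative reconstruction of the schedule the article describes (reminders at 30, 10 and 1 days before expiry), not Los Alamos' actual implementation; the function name is our own.

```python
from datetime import date, timedelta

# Reminder schedule per the policy described above:
# emails at 30 days, 10 days and 1 day before the VM's expiry date.
REMINDER_OFFSETS = (30, 10, 1)

def reminder_dates(expiry):
    """Dates on which the VM owner is emailed about renewal."""
    return [expiry - timedelta(days=d) for d in REMINDER_OFFSETS]

# Default lifetime is one year; e.g. a VM created 1 Jan 2011 expires 1 Jan 2012.
print(reminder_dates(date(2012, 1, 1)))
# -> [datetime.date(2011, 12, 2), datetime.date(2011, 12, 22), datetime.date(2011, 12, 31)]
```

Deletion (the day after the final warning) and the seven-year backup retention would hang off the same expiry date.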
The organization has erected a chargeback structure. Cloud resources are priced according to CPU, RAM and disk. Users can see the total cost before submitting a request for IT resources. Following a request, the line manager has to approve and accept the charges to that unit.
“You have to build best practices around our workloads,” said Karmel. Service Level Agreements (SLAs) are set at four nines. If some hardware goes down and Infrastructure on Demand doesn’t meet the SLA, it doesn’t charge for that resource for that month. In addition, uptime and availability metrics are regularly published so users are fully informed. At the moment, separate network, security and virtual server teams monitor the infrastructure; over time, this may be streamlined into one centralized unit."
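The chargeback-plus-SLA mechanics above (price by CPU, RAM and disk; waive the month's charge if the four-nines SLA is missed) can be sketched as follows. The per-unit rates are illustrative assumptions, not the Lab's actual prices.

```python
# Hypothetical per-unit monthly rates (illustrative, not actual pricing).
RATES = {"cpu": 10.0, "ram_gb": 5.0, "disk_gb": 0.10}
SLA_TARGET = 0.9999  # "four nines" availability

def monthly_charge(cpus, ram_gb, disk_gb, measured_uptime):
    cost = cpus * RATES["cpu"] + ram_gb * RATES["ram_gb"] + disk_gb * RATES["disk_gb"]
    # Per the policy above: if the SLA was missed, the resource is free that month.
    return 0.0 if measured_uptime < SLA_TARGET else cost

print(monthly_charge(2, 8, 100, 0.99995))  # SLA met: 2*10 + 8*5 + 100*0.10 = 70.0
print(monthly_charge(2, 8, 100, 0.9990))   # SLA missed: 0.0
```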
-By the End of the Decade One in Four UK Power Stations are Set to Close and UK Gas Production is Expected to be Half of Current Levels, yet Demand for Electricity is Expected to Increase by More than 50 Per Cent by 2050
-This Collaboration is to Create a Flexible, Secure and Scalable Data and Communications Hub to Support the UK Government's Smart Meter Implementation Programme and its Strategy to Cut Emissions by 80 Per Cent by 2050
LONDON - 21 Mar 2011: IBM (NYSE: IBM) and Cable&Wireless Worldwide (LSE: CW.L), today jointly announce their collaboration to develop a new intelligent data and communications solution, UK Smart Energy Cloud, to support the UK's Smart Meter Implementation Programme, which aims to rollout more than 50 million smart meters in the UK.
UK Smart Energy Cloud has the potential to provide a complete overview of energy usage across the country and pave the way for easier implementation of a smart grid. The solution will utilise the extensive experience IBM has gained from leading and implementing smart grid programmes around the world and its proven enabling software and middleware. The solution will be supported by C&W Worldwide's extensive, secure next-generation network and communications integration capability.
There has never been a more challenging time for the energy industry with decisions being taken to protect the country's energy supply that will have significant implications for everyone in the UK. Both smart meters and the smart grid are significant steps on the journey to a new energy future, potentially changing for the better the way we consume and distribute energy.
The unprecedented interest and projected IT spend on cloud computing is coming from all types of organizations, businesses and governments that are seeking to transform the way they deliver IT services and improve workload optimization so they can quickly respond to changing business demands. Cloud computing can significantly reduce IT costs and complexities while improving asset utilization, workload optimization and service delivery.
Today’s IT Infrastructures face challenges on many levels:
As a result of these challenges, organizations are demanding an IT infrastructure and service delivery model that enables growth and innovation. An effective cloud computing environment built with IBM Power Systems™ cloud solutions helps organizations transform their data centers to meet these challenges:
Power Systems cloud solutions enable customers to build an effective cloud computing environment, enabling organizations to reduce IT costs, improve service delivery and enable business innovation.
ARMONK, N.Y. - 24 Mar 2011: IBM (NYSE: IBM) today launched new, cloud-based software designed to help marketers gain real-time, actionable insight from data available across social media channels.
The new software expands IBM's business analytics capabilities by enabling organizations to develop faster, more precise social media marketing programs that support their brand's total online presence through a cloud-based delivery model.
The first product, IBM Coremetrics Social, helps companies analyze the business impact of their social marketing initiatives, while IBM Unica Pivotal Veracity Email Optimization Suite analyzes email links that are shared across social network platforms, enabling marketers to better capitalize on opportunities across channels.
Today's news follows IBM's recent announcement of new software and the creation of a new consulting practice dedicated to the emerging category of "Smarter Commerce," which is focused on helping companies swiftly adapt to rising customer demands in today's digitally transformed marketplace. Smarter Commerce includes new cloud analytics software that enables companies to monitor their brand's presence in real-time through social media channels to better assess the effectiveness of new services and product offerings, fine tune marketing campaigns, and create sales initiatives in real-time.
"IBM's approach to social media analytics is based on the understanding that people interact with an organization's brand in a number of ways—including email, social networking sites and company Web sites—and the true measure of business impact demands a fully integrated view of the interaction with these resources," said John Squire, chief strategy officer, IBM Coremetrics. "The new social media analytics software unveiled today will help marketers develop more targeted, highly-measurable, and effective social media marketing campaigns."
IBM Coremetrics Social enables organizations across a wide range of industries to measure the effectiveness and return on investment (ROI) of their social marketing initiatives by gaining insight from data that's publicly available on social media websites.
This Smarter Commerce offering delivers real-time intelligence on the social media response to a particular brand, or the products, content and services being offered, and enables clients to make fact-based, accurate decisions about marketing expenditures. As a result, marketing teams can easily attribute business impact to social referrals in the context of other marketing programs.
Using the analytics foundation of the Coremetrics Continuous Optimization Platform™ and its complete suite of marketing optimization applications, IBM Coremetrics Social provides cross-channel reporting and benchmark capabilities to track and improve social marketing campaigns. With social benchmarking, brands can evaluate the effectiveness of their social initiatives relative to their peer companies, and understand where they excel, and where there is opportunity for improvement.
It has become routine for social networks to be used as a resource to broadly share links to special offers made available by companies via email. Well-known brands can expect to see as much as 38 percent of their special-offer email links shared across social networks. An average of 28 percent of these links are then 'liked' or commented on.
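Note that the two percentages compound: the 28 percent applies to the shared links, not to all email links. Using the figures above:

```python
shared = 0.38   # share of special-offer email links shared to social networks
engaged = 0.28  # share of those shared links that are liked or commented on

# Fraction of ALL special-offer email links that end up liked or commented on:
overall = shared * engaged
print(f"{overall:.1%}")  # -> 10.6%
```

So roughly one in ten special-offer email links generates visible social engagement.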
The new IBM Unica Pivotal Veracity Email Optimization suite tracks and analyzes email links that are shared across social network platforms, delivering actionable insights which marketers can turn into recognizable profit. Unlike other technologies, this new offering opens the doors for marketers to identify, track, and improve the perception of their brands across channels. The Social Email Analytics software tracks all links associated with a marketer's brand and email, not just the intended links a marketer shares. This approach better encompasses and reflects the emerging complexities and ramifications of consumer interactions with brands, starting with email and ending up in the social realm. With this new software, marketers can also hone Web pages for social networks and better identify opportunities across channels.
For more information on IBM's Smarter Commerce initiative, please visit: http://www-03.ibm.com/press/us/en/presskit/33983.wss
For more information on Coremetrics, an IBM Company, please visit http://www.coremetrics.com/
For more information on Unica, an IBM Company, please visit http://unica.com/
Vivek Kundra has been an impact player. Since joining the Obama administration as the government’s first CIO, Kundra has been in constant motion, championing one initiative after another, including cloud computing, transparency, metrics and data center consolidation.
But in December 2010, Kundra got everyone’s attention — inside the Washington Beltway and beyond — when he rolled out the administration’s much-anticipated 25-point plan for reforming IT management. The initiative, which pulls together some ideas that have been floated before, provides an IT road map for the next two years. It focuses on shorter procurement cycles, better program management and improved government/industry communications.
Kundra got kudos for spearheading an extensive outreach effort that gave industry groups and agency stakeholders ample opportunity to weigh in on the plan.
Read more about the 2011 Federal 100 award winners.
IBM Expands the Institute for Electronic Government in Washington to Focus on Advancements in Analytics and Cloud Computing
Virtual Collaboratory to Connect Thousands of Government Leaders Globally
WASHINGTON - 01 Mar 2011: IBM (NYSE: IBM) today announced a major expansion of its Institute for Electronic Government (IEG) in Washington, D.C., adding cloud computing and analytics capabilities for public sector organizations around the world.
IBM has moved and expanded the facility in order to meet the growing demand from Government, Health Care and Education leaders who recognize the potential of cloud computing environments and business analytics technologies to improve efficiencies, reduce costs and tackle energy and budget challenges.
According to recent IBM surveys of technology leaders globally, 83 percent of respondents identified business analytics -- the ability to see patterns in vast amounts of data and extract actionable insights -- as a top priority and a way in which they plan to enhance their competitiveness. In addition, an overwhelming majority of respondents -- 91 percent -- expect cloud computing to overtake on-premise computing as the primary IT delivery model by 2015.
The institute provides insights and expertise on emerging technology solutions, drawing on IBM researchers, experts in advanced software platforms, and consultants with deep industry knowledge in areas such as government, health care, transportation, social services, public safety, customs and border management, revenue management, defense, logistics, and education. Read More>
ARMONK, N.Y. & BENGALURU, India - 04 Mar 2011: Today IBM (NYSE: IBM) and The Karnataka Vocational Training and Skill Development Corporation (KVTSDC), an organization within the Department of Labour in India's fastest growing state, announced a new partnership to help millions of citizens find work using their mobile devices. Once created, this technology could be applied in emerging economies around the world.
The World Wide Web has provided unfettered access to information, opened new business and employment opportunities, transformed the way we communicate, helped eliminate geographical barriers and paved the way for global collaboration and integration. But in many of the world's most rapidly growing economies, there is a lack of affordable access to personal computers and the Internet – and in rural areas in particular, widespread illiteracy compounds this gap.
Today in India only 7 percent of the population has access to the Web, but at the same time mobile phones and services are becoming increasingly affordable and reliable, creating the emergence of a Mobile Web and opening the door for citizens to access important government services through their phones. Read more>
ARMONK, N.Y., - 08 Dec 2010: IBM (NYSE: IBM) today announced the availability of new online software services based on the same on-premise solutions used by clients today – now delivered as a monthly subscription offering - that enables better automation and control of IT Service Desk functions. This new service adds to IBM's software-as-a-service offerings that help automate a range of IT services critical to maintaining business operations.
Even small and mid-size companies deal with labor-intensive services for employees such as resolving IT issues, fixing laptops and onboarding new hires. Many companies struggle with slow, inefficient service request handling because at the core their networking, facilities, application support and IT assets aren't integrated and typically depend on manual updates. For example, IBM estimates that only five percent of service and support issues are resolved by self-service, making automation and integration crucial for service management. Learn More>
IBM offers three types of cloud solutions, for storage and other services: Smart Business on the IBM Cloud, Smart Business Cloud services, and Smart Business Systems.
- Smart Business on the IBM Cloud comprises standardized services provided by IBM on a pay-per-use basis.
- Smart Business Cloud services are private cloud services, behind your firewall, built and/or run by IBM.
- Smart Business Systems are purpose-built, integrated Service Delivery Platform solutions.
IBM also offers cloud consulting to help plan and convert applications to the cloud model.
IBM expands its virtualization, image management and cloud computing leadership with major technology breakthroughs
LAS VEGAS, - 01 Mar 2011: PULSE 2011 -- IBM (NYSE: IBM) today showcased a series of technology breakthroughs that extend its leadership capabilities in virtualization, image management and cloud computing, including software that can virtualize a data center within minutes to instantly meet business demand.
These new technologies build on IBM's existing provisioning and image deployment capabilities that help clients better manage virtualized cloud environments to achieve greater business efficiency, agility and innovation while controlling costs.
According to IDC, $17 billion was spent on cloud-related technologies, hardware and software in 2009. IDC expects that spending will grow to $45 billion by 2013.(1)
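The IDC figures above imply a strong compound annual growth rate over the 2009–2013 period, which a quick calculation makes explicit:

```python
# Implied compound annual growth rate (CAGR) of cloud-related spending,
# from $17B in 2009 to $45B in 2013 per the IDC figures cited above.
start, end, years = 17.0, 45.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 27.6% per year
```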
The demand for cloud computing is exploding as organizations seek to expand the impact of IT to deliver new and innovative services while realizing significant economies of scale. The power of the cloud computing model is the ability to harness varying technology investments by enabling rapid and dynamic scheduling, provisioning and management of virtualized computing resources on demand.
IBM has helped thousands of clients adopt cloud models and manages millions of cloud-based transactions every day in areas as diverse as banking, communications, healthcare and government, helping them securely tap into IBM cloud-based business and infrastructure services. By offering proven solutions to accelerate the deployment of advanced infrastructure virtualization with capabilities to visualize, control, and automate these infrastructures, IBM helps global organizations optimize their ROI from technology. Read More>
Chapter 12 - Cloud Users & Roles
There are several actors typically involved in cloud solutions from a business perspective. Their roles, responsibilities and relationships with other actors vary by industry. The business actors' responsibility is to make appropriate cloud investment decisions. Once an organization has started with cloud, there are some typical actors involved in the day-to-day operational consumption and provision of cloud services. This chapter focuses on the latter rather than on the business actors, which typically include the CIO/CTO/COO, the Business Operations Controller and Procurement Managers.
Following are some of the key organizations that are typically involved in a cloud solution. The actors and roles are then defined for users under each of these key organizations.
Cloud Service Consumer: The service consumer is the end user or enterprise that actually uses the cloud service.
Cloud Service Provider: The service provider delivers the service to the consumer.
Cloud Service Creator / Developer: The service developer creates and publishes the cloud service.
These organizations, the typical roles and their associated activities are discussed in detail in the Cloud Use Cases Whitepaper, and Dave Russell has an open thread on Cloud Computing Central to discuss them further.
Out of all the roles across all these organizations, the key roles from an implementation and operation perspective are the following.
Cloud Administrator who can perform the following tasks:
Cloud User who can perform the following tasks:
Accordingly, Tivoli Service Automation Manager provides two different user interfaces for these two key cloud roles: an administrative user interface and a self-service user interface. Find details here.
There are variations of these two roles depending on the design of the cloud provider and consumer organizations. For example, a Team Administrator can perform tasks for a group of users, such as creating and maintaining user accounts and placing requests on behalf of a project.
These business-specific roles then need to be mapped to application roles such as Service Administrator, Service Definition Designer/Manager, and Service Deployment Operator and Manager. The security framework implementation should take care of this role mapping. The security function of Tivoli Service Automation Manager manages which users can log into the user interface and which applications each user can access. The broader discussion of security, specifically authentication followed by authorization, is covered in a separate chapter.
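The role mapping described above can be sketched as a simple lookup from business roles to the application roles they carry. The mapping contents here are illustrative (the "Self-Service Requester" application role is our own placeholder, not a documented Tivoli role); a real deployment would drive this from the product's security configuration.

```python
# Illustrative business-role -> application-role mapping.
# Role names on the left follow the chapter; the mapping itself is assumed.
ROLE_MAP = {
    "Cloud Administrator": {"Service Administrator", "Service Definition Designer/Manager"},
    "Team Administrator": {"Service Deployment Operator"},
    "Cloud User": {"Self-Service Requester"},  # placeholder application role
}

def can_access(business_role, application_role):
    """Authorization check: does this business role map to the application role?"""
    return application_role in ROLE_MAP.get(business_role, set())

print(can_access("Cloud Administrator", "Service Administrator"))  # True
print(can_access("Cloud User", "Service Administrator"))           # False
```

Authentication (who the user is) happens first; this check is the authorization step that follows it.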
RHyman
Today IBM announced two new cloud computing initiatives for Business Partners. The first, the IBM Cloud Computing Specialty, is a single program to develop the IT industry's broadest ecosystem of companies working together to provide a wide range of cloud computing services and technologies for clients of all sizes and industries. The second is the IBM Software Value Plus Cloud Computing Authorization for software resellers.
Both initiatives are complementary. IBM Business Partners with an SVP Cloud Authorization will have completed the IBM Software skills required for the Cloud Specialty. While the IBM Cloud Specialty focuses on the development and promotion of top cloud Business Partners, the new authorization is an extension of the IBM Software Value Plus program, specifically for IBM software Business Partners that have built and demonstrated specialty skills and then receive financial incentives as resellers of IBM's software portfolio.
You may recall the recent IBM developerWorks survey of more than 2,000 IT professionals worldwide, which showed that 91 percent believe cloud computing will overtake on-premise computing as the primary way organizations acquire IT by 2015. Industry analysts have also said that the cloud opportunity is expected to more than double in the next few years.
And IBM developerWorks continues to be committed to being your source for the technical resources to build your cloud skills so you can participate in the coming opportunities. The Cloud zone on IBM developerWorks lets you collaborate with peers to solve development issues and excel with cloud computing, keeping you in lock step with the opportunities expected to arise as cloud computing grows.
It's an exciting space; grow your knowledge to participate in the smarter planet.
Chapter 11 – Self Service Portal & Service Catalog
One of the key aspects of cloud service management is automation, which ensures that you can manage huge and growing infrastructures while controlling cost and quality. To attain this goal, we need a Self-Service Portal and a Service Catalog. Results show that with these components in place, the wait time for services has decreased by an average of 98%.
Traditional processes would require you to fill out a paper form and push it through the approval process. Finally, the capex is approved and the order is placed for the hardware and software. You would also be required to constantly follow up with the IT provider teams to learn the status of the hardware/software availability, installation, provisioning, and so on. Even if all the details are provided correctly upfront, there is a chance of errors in the hardware and software provisioning because the process is manual.
With the Self-Service Portal, these requests and their tracking are automated. You can track the status of the workflow online, ask for services when you need them, and have most of them provisioned automatically through the implemented workflows. With the Self-Service Portal and automation, there is less chance of error and provisioning is faster.
Thus the Self-Service GUI allows end users to request IT Resources and optionally automatically fulfill that request.
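The request-track-fulfill flow described above can be sketched as a tiny state machine. The states and transitions below are illustrative only; TSAM implements this through its own workflow engine, not through code like this.

```python
# Minimal sketch of an automated self-service request workflow.
# States and transitions are illustrative, not TSAM's actual workflow model.
VALID_TRANSITIONS = {
    "SUBMITTED": {"APPROVED", "REJECTED"},
    "APPROVED": {"PROVISIONING"},
    "PROVISIONING": {"COMPLETED", "FAILED"},
}

class ServiceRequest:
    def __init__(self, requester: str, offering: str):
        self.requester = requester
        self.offering = offering
        self.state = "SUBMITTED"
        self.history = ["SUBMITTED"]   # lets the end user track status online

    def advance(self, new_state: str) -> None:
        """Move the request forward; invalid jumps are rejected."""
        if new_state not in VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)
```

A request for, say, a Linux VM would be submitted, approved, provisioned, and completed entirely through such transitions, with the history available to the requester at every step.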
Tivoli Service Automation Manager provides a set of pre-defined services for Virtual Server Management. These are available as part of a service catalog accessible to end users through the Self-Service UI. The Self-Service Virtual Server Management functionality addresses a long-standing need of data centers to efficiently manage the self-service deployment of virtual servers and associated software. Using a set of simple, point-and-click tools, an end user can select a software stack and have the software automatically installed or uninstalled on a virtual host that is automatically provisioned.
These tools integrate with IBM Tivoli Service Request Manager to provide a self-service portal for reserving, provisioning, recycling, and modifying virtual servers, and working with server images, in the following platform environments in a virtualized non-production lab (VNPL). This functionality ensures the integrity of fulfillment operations that involve a wide range of resource actions.
These capabilities enable you to achieve incremental value by adopting a self-service virtual server provisioning process, growing and adapting the process at your own pace, and adding task automation to further reduce labor costs around defined provisioning needs.
Before users in the data center can create and provision virtual servers, administrators perform a set of setup tasks, including configuring the integration, setting up the virtualization environments managed by the various hypervisors, and running a Tivoli Provisioning Manager discovery to find servers and images across the data center.
After this initial setup has been completed, the administrator associates the virtual server offerings with Tivoli Provisioning Manager virtual server templates. In addition, the Image Library is used as the source for software images to be used in provisioning the virtual servers.
Data center users who have Cloud Admin rights can use the Service Automation Manager Offering Catalog application to create and provision virtual server deployments.
The Offering Catalog application contains all the offerings that are available to end users. There are steps you need to perform on the catalog to make specific offerings visible to specific end-user groups. The end-user interface is a Web 2.0 interface that can be edited to expose it via a Service Catalog. The Web 2.0 UI is designed in an extensible, modular way that allows it to be extended programmatically.
Tivoli Service Automation Manager defines security groups that are used to provide role-based functions that can be performed via the administrative user interface or the self-service user interface. We will discuss the User access management for the Self-Service Virtual Server Provisioning component in the next chapter.
Chapter 10 – Cloud Service Design using Tivoli Service Automation Manager
When we are building a solution for a certain kind of IT service, the design should cover two important parts.
Tivoli Service Automation Manager supports both these models, with concepts aligned around the ITSM service lifecycle.
The structural model describes what the service to be managed looks like, while the operational model defines what processes can be executed on the service. The structural model in Tivoli Service Automation Manager defines all the components that make up a service as well as the relationships between them.
The Service Topology application allows the representation of the service in terms of hardware servers and their associated software. The primary data the Service Topology application operates on are topology and topology node objects, and the application provides a means of viewing and editing them.
The operational model defines all the management processes that can be run on the service described by the structural model, in particular the processes that are subject to automation. This is done through a process model for the service, which typically contains process templates that can be instantiated for various stages of a service's life cycle, including creation and modification of a deployed service. Tivoli Service Automation Manager calls each process defined in the process model a Management Plan: a definition of a sequence of tasks performed on the service's components, aimed at achieving a certain management goal. Each management plan represents a specific process or action taken on an instance of a service definition, and it also describes where the input data for each task comes from and where each task's output data is stored for further processing.
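A management plan, as described, is essentially an ordered list of tasks where each task's output can feed the inputs of later tasks. The following sketch shows that idea with made-up task names and toy functions; none of this is a TSAM API, just an illustration of the input/output wiring.

```python
# Sketch of a management plan: an ordered sequence of tasks whose outputs
# feed later tasks. Task names and functions are hypothetical.
def run_management_plan(tasks, initial_inputs):
    """Run tasks in order. Each task is (name, fn, input_keys, output_key);
    results live in a shared context so later tasks can consume them."""
    context = dict(initial_inputs)
    for name, fn, input_keys, output_key in tasks:
        args = [context[k] for k in input_keys]
        context[output_key] = fn(*args)
    return context

# Toy "create virtual server" plan: reserve capacity, create the VM,
# then install a software stack on it.
plan = [
    ("reserve_capacity", lambda cpus: f"slot-for-{cpus}cpu", ["cpus"], "slot"),
    ("create_vm",        lambda slot: f"vm-in-{slot}",       ["slot"], "vm"),
    ("install_stack",    lambda vm, stack: f"{vm}+{stack}",  ["vm", "stack"], "deployed"),
]
result = run_management_plan(plan, {"cpus": 2, "stack": "LAMP"})
```

The shared context is the sketch's stand-in for the plan's description of where each task's input comes from and where its output is stored.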
Service Definitions are used to capture the design of a service both from a structural point of view and from a process-centered point of view. Upon an end-user request, new Service Deployment Instances can be built based on the model captured in the respective Service Definitions. Those Service Deployment Instances are used by Tivoli Service Automation Manager to deploy and manage services in the real world.
Finally, once the design of a service being automated is complete, offerings can be created and published into Service Catalogs. Services implemented in Tivoli Service Automation Manager can be exposed to end users in an easily accessible way, based on the notions of service catalogs and service offerings.
Chapter 9 – Cloud Service Design
Once you have installed and set up your management platform, you are ready to start designing and delivering cloud services using it.
SOA & Cloud
We use the same principles of Service-Oriented Modeling and Architecture (SOMA), which links business intent with its realization through IT, for modeling cloud services as well. In SOA, we use business process models to understand a series of sequentially organized business activities, the events that trigger them, the roles that perform them, inputs, outputs, control points, and so on. As discussed in the Service Strategy section, we aim to design cloud services that are better aligned with business requirements.
As in SOA, for service identification and design one could take any of the following approaches.
In a top-down approach, development generally starts with high-level business and structural modeling of the service. You then define the management processes required for the service to be in operation. The top-down approach is further characterized in that no, or only a few, automation or fulfillment assets exist when the solution design starts. Design and implementation of those assets, including their interfaces and granularity, will be driven primarily by the high-level automation model. The advantage of the top-down approach is a clear design of the service to be automated, including its structural and operational models.
The bottom-up approach is usually characterized by a large number of automation assets that already exist, often in the form of many scripts or workflows. In the bottom-up approach, we take these low-level assets and abstract them into a cloud service.
In practice we might go with a combination of both, known as the meet-in-the-middle approach.
We model the service so we can learn, capture, and abstract details about "things": their structures, the relationships between them and, often, their behaviors (collaborations, states). All the factors we consider when modeling a service in SOA are very much applicable to a cloud service too. These include, but are not limited to:
The ABCs of Service Design for Clouds by David Linthicum is a good article that discusses where SOA meets cloud.
Service Management & Cloud
Now let's discuss the same topic from the Service Management / ITIL perspective. Cloud services have a lifecycle that maps to the service management lifecycle.
The Service Design phase includes the service definition, the creation of the service, and its registration in a catalog. We will look at how these can be done using Tivoli Service Automation Manager in the next chapter.
Service Design is a critical step that delivers the following benefits:
Chapter 8 – Cloud Service Strategy
As discussed in Chapter 5, IBM Integrated Service Management provides the software, systems, best practices and expertise needed to manage infrastructure, people and processes—across the entire service chain—in the data center, across design and delivery, and tailored for specific industry requirements. The Service Management Goals are the following
These principles and goals are the same for Cloud Service Management as well. End to End Service Management includes the following steps.
Cloud Maturity and Readiness
Cloud Service Strategy is mainly about deciding what services we want to deliver and how we ensure the competitiveness of providing them through the cloud. Today's clients are seeking to utilize their assets to enable business innovation. The service strategy is all about choosing from across multiple compute/deployment models. We need to assess the current IT infrastructure and identify and evaluate the set of capabilities for their readiness to move to the cloud.
Selecting between the Cloud Deployment Models
For mission-critical workloads that drive business innovation, a private cloud is preferred; for secondary workloads and supporting business functions, a public cloud is suitable. While the public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible price-per-use basis focused on utility, the private cloud drives efficiency, standardization and best practices while retaining greater customization and control, with a focus on innovation.
When doing Service Strategy, you need to consider expertise across industries and standards. At this phase, we normally consider reusing and leveraging solutions based on industry best practices, including ITIL, COBIT, eTOM, and ISO.
Calculating the ROI
Cloud computing ROI is an important consideration during the Service Strategy phase. It includes verifying the following fundamental aspects of making a service available on the cloud.
There are several ROI frameworks and methods available that allow you to validate your approach and strategy against these three fundamental aspects. Most service companies have their own frameworks, which are typically the intellectual capital of their services teams.
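A basic payback calculation can illustrate the kind of arithmetic such ROI frameworks perform. All figures and the simple model below are hypothetical; real frameworks weigh many more factors (risk, compliance, amortization of migration costs, and so on).

```python
# Hypothetical ROI / payback sketch for moving a service to the cloud.
# All numbers are made up for illustration.
def cloud_roi(current_annual_cost, cloud_annual_cost, migration_cost, years=3):
    """Return (total_savings, roi_ratio, payback_years) over a planning horizon."""
    annual_savings = current_annual_cost - cloud_annual_cost
    total_savings = annual_savings * years - migration_cost
    roi_ratio = total_savings / migration_cost
    payback_years = (migration_cost / annual_savings
                     if annual_savings > 0 else float("inf"))
    return total_savings, roi_ratio, payback_years

# Example: $500k/year on-premise cost, $350k/year in the cloud,
# $200k one-time migration cost, over a 3-year horizon.
savings, roi, payback = cloud_roi(
    current_annual_cost=500_000,
    cloud_annual_cost=350_000,
    migration_cost=200_000,
)
```

With these made-up numbers the migration pays for itself in about 1.3 years and nets $250k over three years, which is exactly the kind of result an ROI framework would then weigh against risk and strategy.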
Choosing the right Delivery Models and Workloads
Based on the Enterprise Architecture approach, we need to choose from the many available options of delivery models and workloads. This includes services and consulting engagements to obtain clarity on business drivers (business vision, strategy, timeline, business model, and business operating model) and how they can leverage technology and value enablers from cloud computing. In this cycle you also need to identify the right set of workloads to move to the cloud, those that fetch the maximum benefits from cloud computing. The flexibility that the business operating model gains to innovate on the business model is another key consideration. This could be an iterative effort of identifying candidates and then gradually moving them to production.
One of the biggest challenges in adopting cloud computing in your organization is deciding where to start and how to focus your efforts. IBM provides a Cloud Adoption Advisor to get started on the topic. The Open Group has also published a whitepaper on building return on investment from cloud computing.
Key Benefits from Service Strategy
Chapter 7 - IBM Tivoli Service Automation Manager – Architecture Overview
Each of the integrated capabilities required to implement service management for the cloud is provided by IBM Tivoli Service Automation Manager (referred to as TSAM in this chapter). TSAM supports the cloud through all phases of the entire service lifecycle. The steps include:
To support these phases, it provides the following capabilities.
Each of these capabilities is delivered by discrete components within TSAM. A quick view of the architecture will help you understand how these capabilities are provided seamlessly by multiple components underneath TSAM.
Figure 1 Tivoli Service Automation Manager - Architecture Overview
Below are the key components and responsibilities
Tivoli Service Request Manager
Tivoli Service Automation Manager (Service Design)
Tivoli Provisioning Manager
Even though I would like to go into detail on each component as part of this post, I'm not going to do so because, as discussed in the initial post, the objective of this blog is to provide readers with pointers to the content they need, not to repeat what is already available elsewhere. You can read more about the TSAM architecture on the TSAM wiki on developerWorks.
I'm including the list of software bundles for TSAM 7.2.1 to give a better understanding of the components involved.
Again, the TSAM infocenter provides more details on the typical hardware and software requirements and related topics.
ChrisWard
So I'm sitting here with a rack of BlueGene/P. Four racks, actually, and it's on a different continent, but I still have the use of it. And I'm wondering what it can do.
One rack can drive enough data over its external fiber-optics (TCP on ethernet) to support 30,000 domestic broadband video-on-demand links. About 270 Gbit/second. And it has enough memory to hold 1000 movies. You could make a movie rental vending machine; after a couple of hours, load up another 1000 movies and try to rent them. Rinse and repeat, for as long as there is a market.
But IBM isn't going to make movies (that's Disney, and others). And IBM isn't going to operate the 'last mile' network to the home (that's AT&T, and others). IBM is going to manufacture and market the reliable, high-performance servers that you need as one of the vital components of the solution.
So come on, partner businesses. There is opportunity here (I think ... I am just the Scientist around here ... who's the Market Researcher?). Shall we take it? Does it make the world a better place if we do?
A place where there's room to grow. Opportunity. That's the American Dream. Who's coming ?
Cloud developers: Register for a can't-miss virtual event in October to learn indispensable techniques to prepare for the smarter planet
RHyman
If you haven't signed up yet, be sure to check out the October cloud computing for developers virtual event. Participants in this two-day event will learn how to leverage the power of the cloud to tackle the toughest business and technical challenges! The event will be packed with real-world examples and live demos of techniques and products, and you'll see it all without leaving your desk. It's going to be exciting to have you all there with us, getting smarter and learning new technical skills to prepare us all for a smarter planet.
Here's some of what's planned for the event. Remember that you can ask our team of experts as many questions as you wish about any of the sessions.
That's not all, folks; remember we have a full set of sessions on the second day too. Note that you'll have to register separately for day 2.
We are giving you a choice: pick the two-day event best suited to you depending on where you are in the world. Both events will have very similar sessions. Register for the event that is best timed for North American (October 12-13) or European (October 26-27) time zones.
Visit the IBM Cloud for developers group to view the agenda and session descriptions, or register here.
We are looking forward to learning with you so join us this month to get a little smarter.
Packt Publishing is planning to expand its lineup of Cloud Computing books and is currently inviting book ideas; potential authors interested in writing about IBM Cloud Computing should get in touch.
You are not required to be an experienced author to write for Packt. If you have good knowledge of your subject, a passion to share it with others, and can communicate clearly in English, you could be Packt's next author.
Some of the topics which immediately interest Packt are:
The list of topics here is not exhaustive. If you think there are other topics on which readers need a book, Packt would love to hear about them.
More details about this can be found at: http://authors.packtpub.com/content/packt-invites-ibm-cloud-computing-authors-and-book-ideas
So, if you love Cloud Computing and fancy writing a book, send your book ideas to firstname.lastname@example.org. If you don't have a book idea and are simply interested in writing, Packt is still keen to hear from you.
Now that we have discussed what cloud is, its different delivery and deployment models, and its benefits, let us look at the steps enterprises should take to get on the cloud.
The first obvious step for traditional IT is to consolidate the physical infrastructure. IT service providers are facing enormous challenges and rising costs in managing and operating a growing number of heterogeneous system and computing landscapes. It is not just about managing the hardware and software, but also the power, cooling, location and many other resources required to run a data centre. The cost of managing all of these has forced traditional computing environments to reorganize computing resources within the data centre, as continuous growth leads to inefficient utilization of the underlying hardware and software infrastructure. Reducing this management cost through server and infrastructure consolidation and better utilization is one of the first steps towards transforming corporate data centres into compute clouds.
Gartner provides this interesting material for further details on IT infrastructure consolidation. IBM's offerings in this space, all the way from strategy and planning to consolidation of all the resources, can be found here. Even after consolidation, however, the IT infrastructure or compute resource is still statically bound to specific workloads.
Virtualization is the next logical step to cloud, one that enables flexibility. Virtualization provides infrastructure abstraction and the flexibility to give out soft-configured IT resources that can be recycled and reused with ease. This increases the utilization of the resources and at the same time makes the data centre energy efficient. Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization and server virtualization. You can read more about it in this article. We will discuss each of these in detail in subsequent lessons.
Even with virtualization, there exist many configurations that are constrained and cannot be managed effectively. So it is important, from the data centre perspective, to standardize the IT services and offerings it can provide to IT consumers. The 80-20 rule applies here: look for the configurations and services that account for 80% of IT service requests and standardize them for better management. The biggest outcome of standardization is simplification; we will have only a few configurations to automate and support.
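The 80-20 analysis above amounts to ranking configurations by request frequency and keeping the smallest set that covers 80% of demand. A sketch with made-up request data (the configuration names are invented for illustration):

```python
from collections import Counter

def configs_covering(requests, target=0.8):
    """Return the smallest set of configurations, most-requested first,
    that together cover `target` of all service requests."""
    counts = Counter(requests)
    total = sum(counts.values())
    covered, chosen = 0, []
    for config, n in counts.most_common():
        chosen.append(config)
        covered += n
        if covered / total >= target:
            break
    return chosen

# Hypothetical request log: most demand falls on two standard configurations.
log = (["small-linux"] * 50 + ["medium-linux"] * 30
       + ["large-windows"] * 15 + ["custom"] * 5)
standard_offerings = configs_covering(log)
```

Here two configurations cover 80% of requests, so those two become the standardized offerings to automate first, while the long tail stays on a manual or by-exception path.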
Even after standardization, many traditional IT processes remain highly manual, skill-dependent, error-prone, and costly. The next major step towards cloud, and the one that provides a huge ROI on cloud investment, is automation. This step removes human involvement to a large extent and provides rapid deployment and management of IT services as a repeatable activity, which also improves compliance.
The final step to cloud comes from being able to optimally share resources among common workload profiles and to dynamically provision resources based on demand. This is possible only by providing a self-service user interface for the IT service consumer. Such self-service capability gives complete control to the user, along with a wide choice of cost and usage models that can be selected based on requirements. Beyond these features, the cloud environment should provide increased visibility and the required IT/business alignment.
A service management system is the capability that provides the visibility, control, and automation needed for efficient cloud delivery in both public and private implementations. Many aspects, such as simplified user interaction with IT, a user-friendly self-service interface, a service catalog, automated provisioning and de-provisioning, and increased system administrator productivity, are provided by the service management system. In fact, 81%* of cloud payback is driven by labour savings enabled by service management. We will discuss service management in detail in the next chapter.
Can anyone recommend some reading material that would be good as a starting point for someone who knows absolutely nothing about CLOUD computing, but would like to learn?