Cloud Computing Central
I'll make no bones about the fact that I'm a huge fan of Cloud Foundry. It's the right play, by the right people, at the right time. Despite all the attempts to dilute the message over the last eleven years, Platform as a Service (or what was originally called Framework as a Service) is about writing code, writing data and consuming services. All the other bits, from containers to the management of them, are red herrings. They may be useful subsystems but they miss the point, which is the necessity for constraint.
Constraint (i.e. the limitation of choice) enables innovation, and the major problem we have with building at speed is almost always duplication or yak shaving. Not only do we repeat common tasks to deploy an application, but most of our code is endlessly rewritten throughout the world. How many times in your coding life have you written a method to add a new user or to extract consumer data? How many times do you think others have done the same thing? How many times are not only functions but entire applications repeated endlessly between corporations or governments? The overwhelming majority of the stuff we write is yak shaving, and I would be honestly surprised if more than 0.1% of what we write is actually unique.
Now whilst Cloud Foundry has been doing an excellent job of getting rid of some of the yak shaving, in the same way that Amazon kicked off the removal of infrastructure yak shaving - for most of us, unboxing servers, racking them and wiring up networks is thankfully an irrelevant thing of the past - there is much more to be done. There are some future steps that I believe Cloud Foundry needs to take, and fortunately the momentum behind it is such that I'm confident of talking about them here without giving a competitor any advantage.
First, it needs to create that competitive market of Cloud Foundry providers. Fortunately this is exactly what it is helping to do. That market must also be focused on differentiation by price and quality of service and not the dreaded differentiation by feature (a surefire way to create a collective prisoner's dilemma and sink a project in a utility world). This is all happening and it's glorious.
Second, it needs to increasingly leave the past ideas of infrastructure behind, and by that I mean containers as well. The focus needs to be serverless, i.e. you write code, you write data and you consume services. Everything else needs to be buried as a subsystem. I know analysts run around asking "is it using Docker?" but that's because many analysts are halfwits who like to gabble on about stuff that doesn't matter. It's irrelevant. That's not the same as saying Docker is not important; it has huge potential as an invisible subsystem.
Third, and most importantly, it needs to tackle yak shaving at the coding level. The simplest way to do this is to provide a CPAN-like repository which can include individual functions as well as entire applications (hint: GitHub probably isn't up to this). One of the biggest lies of object-oriented design was code re-use. This never happened (or rarely did) because no communication mechanism existed to actually share code. CPAN (in the Perl world) helped (imperfectly) to solve that problem. Cloud Foundry needs exactly the same thing. When I'm writing a system, if I need a customer object, then ideally I should just be able to pull in the entire object and the functions related to it from a CPAN-like library because, let's face it, how many times should I really have to write a postcode lookup function?
But shouldn't things like postcode lookup be provided as a service? Yes! And that's the beauty.
By monitoring a CPAN-like library you can quickly discover (simply by examining metadata such as downloads and changes) which functions are commonly used and have become stable. These are all candidates for standard services to be provided into Cloud Foundry and offered by the CF providers. Your CPAN environment is actually a sensing engine for future services, and you can use an ILC-like model to exploit this. The bigger the ecosystem is, the more powerful it will become.
I would be shocked if Amazon isn't already using Lambda and the API gateway to identify future "services", and Cloud Foundry shouldn't hesitate to press any advantage here. This process will also create a virtuous cycle: new things which people develop and share in the CPAN-like library will over time become stable, widespread and provided as services, enabling other people to develop new things more quickly. This concept of sharing code and combining the collaborative effort of the entire ecosystem was a central part of the Zimki play, and it's as relevant today as it was then. By the way, try doing that with containers. Hint: they are way too low level, and your only hope is through constraint such as that provided in the manufacture of unikernels.
There is a battle here because if Cloud Foundry doesn't exploit the ecosystem and AWS plays its normal game then it could run away with the show. The danger of this seems slight at the moment (but it will grow) because of the momentum with Cloud Foundry and because of the people running the show. Get this right and we will live in a world where not only do I have portability between providers but when I come to code my novel idea for my next great something then I'll discover that 99% of the code has already been done by others. I'll mostly need to stitch all the right services and functions together and add a bit extra.
Oh, but that's not possible, is it? In 2006, Tom Insam wrote for me and released live to the web a new style of wiki (with client-side preview) in under an hour using Zimki. I wrote an internet mood map and a basic trading application in a couple of days. Yes, this is very possible. I know, because I experienced it, and this isn't 2006, this is 2016!
Cloud Foundry (with a bit of luck) might finally release the world from the endless yak shaving we have to endure in IT. It might make the lie of object re-use finally come true. The potential of the platform space is vastly more than most suspect, and almost everything, and I do mean everything, will be rewritten to run on it.
I look forward to the day that most Yaks come pre-shaved. For more read....
Microservice architecture resembles service-oriented architecture (SOA) in that both rely on cohesive, loosely coupled services strung together to provide a solution. Beyond this similarity, the commonality between the two architectures largely ends. A microservice architecture consists of completely decoupled services that communicate with each other via REST-over-HTTP APIs. Each service can run in its own environment, even using a different programming language, and each service can have its own deployment and management cycle while keeping the final solution consistent.
Docker is a technology that is naturally suited to building an application with a microservice architecture. You can picture a Docker container as a wafer-thin slice of a Linux environment: containers share the host kernel rather than each carrying a full guest operating system, so compared with a regular VM a container is lightweight and more manageable. You can load microservices into individual containers, each completely isolated from the others.
Besides isolation, Docker also provides a consistent environment as code moves from Development -> QA -> Production. Developers can have development-time Docker containers that run on minimal hardware resources, and the same code can be deployed consistently in scaled-up production environments, including cloud and PaaS/IaaS infrastructures. Same code, different scale!
With that in mind, I would like to cover the implementation details associated with developing Java Microservices
in a Docker environment.
This step involves developing the Java microservice. Keeping the scope of the blog in mind, the microservice can be downloaded and run with the following instructions:
Implementation details about the Microservice can be studied in the source code by loading the project into your preferred Java IDE such as Eclipse.
Before the microservice can be run inside Docker, Docker must be installed on your local machine. You can follow the step-by-step Docker installation procedure at: Docker Installation
Once Docker is installed correctly, you can test your installation using the following command:
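The exact test command is not shown in this excerpt; a common way to verify an installation (assuming a standard Docker setup with the daemon running) is:

```
# Confirm the Docker client is on your PATH and the daemon is reachable
docker --version
docker info

# Canonical smoke test: pulls and runs a tiny test image
docker run hello-world
```

If `docker run hello-world` prints its welcome message, the installation is working end to end.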
Create a Microservice Docker Image
In the Docker ecosystem, there are two main concepts to understand: the image (the packaged software together with the environment it needs) and the container (a running instance of an image).
For the above microservice, the container loads the microservice image; the image contains not only the application code for the microservice but also the Java 8 environment needed to run it.
But, before you can load the microservice into Docker, you need to create a Docker image for that software. The steps to create the image are as follows:
The Docker build process uses a file named Dockerfile to get its instructions about what to do when building an image. In this particular microservice, the Dockerfile instructs the Docker system to download an image called 'java:8'. This is the core infrastructure needed to run the microservice. Next it adds the microservice jar and configuration to the image. And later, it exposes the ports 9000 and 9001 to service the requests.
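A minimal Dockerfile matching that description might look like the following sketch (the jar and configuration file names are illustrative; check the project source for the actual names):

```
# Base image providing the Java 8 runtime
FROM java:8

# Add the microservice jar and its configuration (names are illustrative)
ADD hello-microservice.jar /opt/hello-microservice.jar
ADD service-config.yml /opt/service-config.yml

# Service port (9000) and admin port (9001)
EXPOSE 9000 9001

# Start the microservice when the container launches
ENTRYPOINT ["java", "-jar", "/opt/hello-microservice.jar"]
```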
The following command processes the Dockerfile and produces the hello-microservice-local image: docker build -t hello-microservice-local .
Note: make sure this command is issued from the Docker session and not just any command line session.
Once this Java Microservice Docker image is created, it must be run inside a Docker container using the following command:
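The run command itself is not shown in this excerpt; assuming the image name from the build step and the ports from the Dockerfile, it would look something like:

```
# Run the image in a detached container, mapping the service and admin ports;
# the container name is chosen so the later "docker stop" command matches
docker run -d --name hello-microservice-local -p 9000:9000 -p 9001:9001 hello-microservice-local
```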
You can test the Microservice in the browser using: http://localhost:9000/java/microservice
Once this works, you can stop the microservice using: docker stop hello-microservice-local
Publish the Microservice Docker Image
Now that you have the Microservice Docker Image working locally, you can publish this image to DockerHub to share with your team. This can be accomplished as follows:
Steps to post your local image to the remote repository
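The individual steps are not listed in this excerpt; the usual DockerHub workflow is to tag the local image under your DockerHub namespace, log in, and push (the `myuser` namespace is illustrative, and `hello-microservice-remote` follows the naming used below):

```
# Tag the local image under your DockerHub namespace
docker tag hello-microservice-local myuser/hello-microservice-remote

# Authenticate against DockerHub
docker login

# Push the tagged image to the remote repository
docker push myuser/hello-microservice-remote
```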
Before testing the remote image, you need to delete the local images. Get the image id for both 'hello-microservice-local' and 'hello-microservice-remote' using: docker images
and remove the two images using the command: docker rmi -f imageid
Once the images are removed, you can test the remote image using the following command:
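Again assuming the illustrative `myuser` namespace from the publish step, pulling and running the remote image would look like:

```
# Docker pulls the image from DockerHub automatically when it is not cached locally
docker run -d --name hello-microservice-remote -p 9000:9000 -p 9001:9001 myuser/hello-microservice-remote
```

You can then repeat the browser test at http://localhost:9000/java/microservice to confirm the remote image behaves identically.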
The Cloud Ecosystem Development team in France will welcome IBM business partners on March 5th, 2015.
All information is available on the following site: http://www-01.ibm.com/software/fr/channel/KO_BP2015/
The agenda is described here: http://www-01.ibm.com/software/fr/channel/KO_BP2015/agenda.html and the workshops here: http://www-01.ibm.com/software/fr/channel/KO_BP2015/ateliers.html
We will be happy to welcome you for face-to-face discussions.
Alain Airom (cloud solution architect).
With the recent exploration of cloud computing technologies, organizations are using cloud service models like infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) along with cloud deployment models (public, private and hybrid) to deploy their applications.
Come to the first Cloud Foundry Meetup in the Waltham area this coming Wednesday, December 11th!
Managing software and product lifecycle integration has always been a challenge, and the challenges are increasing with the rate of new demands on the enterprise. Leaders from different standards organizations and industry will lead interactive discussions on the importance of open technologies to help enterprises manage the lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards and the importance of this work to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization. If you're in North America, register here for the April 16th session: http://bit.ly/Y1X32g
If you're in Asia Pacific, register for the April 23rd session: http://bit.ly/1632q2Q
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess or manage. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency-- the ability to allocate IT costs, usage, and value.
Register today: http://bit.ly/VXXxl3
As a result of feedback from SmartCloud Enterprise customers and business partners, IBM is rolling out new enhancements this week.*
In addition to the availability of IBM SmartCloud Application Services, IBM’s platform-as-a-service offering, new and enhanced capabilities for IBM SmartCloud Enterprise include:
All the details of each new capability/enhancement can be found on the SCE portal in the “What’s New in SmartCloud Enterprise 2.2” document (SCE account sign-in is required to review the document), but here are a few highlights:
IBM SmartCloud Application Services (SCAS)
IBM’s platform as a service -- IBM SmartCloud Application Services -- runs on top of and deploys virtual resources to IBM SmartCloud Enterprise. SmartCloud Application Services delivers a secure, automated, cloud-based environment that supports the full lifecycle of accelerated application development, deployment and delivery. SCAS provides an enterprise-class infrastructure, enhanced security and pay-per-use, and allows clients to differentiate themselves with built-in flexible options that configure cloud their way – leading to a competitive advantage.
You can find the SmartCloud Application Services offering on the “Service Instance” tab within your SmartCloud Enterprise account.
Windows Instance Capture
As a direct result of client requests, we are offering additional flexibility and choice in Windows instance capture. Clients can now use the “Save private image” function with or without the use of Sysprep, the Microsoft System Preparation tool.
We invite you to learn more about all of these enhancements via the documentation library in the SCE portal and welcome your feedback. Thank you for your continued support!
* IBM will roll out these new capabilities in waves beginning mid-December 2012. IBM’s platform as a service offering, IBM SmartCloud Application Services, can be found in the “Service Instance” tab within your SmartCloud Enterprise account.
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
The challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity. A critical piece to solving these challenges, as many organizations have already discovered, is image management. Read more: http://ibm.co/SpHTlV
I've shared my thoughts on building a secure and trusted cloud on thoughtsoncloud.com
I hope you enjoy the read and will share your comments. I especially wanted to highlight how we can improve trust.
Trust in cloud can be established with the same principles that we use for traditional service management (read my earlier post on Cloud Computing Central for details):
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
Read more about how cloud orchestration can simplify and accelerate service delivery.
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
Read more about provisioning and orchestration capabilities to meet growing business needs.
Glad to let the Cloud Computing Central members know that I've also started writing on ThoughtsonCloud, the IBM cloud experts blog. Please read my first post there, about maximizing the value of cloud for small and medium enterprises (SMEs), and let me know your comments and feedback. Thanks!
Cloud Computing is a term that is often bandied about the web these days and often attributed to different things that -- on the surface -- don't seem to have that much in common. So just what is Cloud Computing? I've heard it called a service, a platform, and even an operating system. Some even link it to such concepts as grid computing -- which is a way of taking many different computers and linking them together to form one very big computer.
A basic definition of cloud computing is the use of the Internet for the tasks you perform on your computer. The "cloud" represents the Internet.
Cloud Computing is a Service
The simplest thing that a computer does is allow us to store and retrieve information. We can store our family photographs, our favorite songs, or even save movies on it. This is also the most basic service offered by cloud computing.
Flickr is a great example of cloud computing as a service. While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to store those images. In many ways, it is superior to storing the images on your computer.
First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload the photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road or even from your iPhone while sitting in your local coffee house.
Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.
Third, Flickr provides data security. If you keep your photos on your local computer, what happens if your hard drive crashes? You'd better hope you backed them up to a CD or a flash drive! By uploading the images to Flickr, you are providing yourself with data security by creating a backup on the web. And while it is always best to keep a local copy -- either on your computer, a compact disc or a flash drive -- the truth is that you are far more likely to lose the images you store locally than Flickr is of losing your images.
This is also where grid computing comes into play. Beyond just being used as a place to store and share information, cloud computing can be used to manipulate information. For example, instead of using a local database, businesses could rent CPU time on a web-based database.
The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them if you are cut off from the Web.
Cloud Computing is a Platform
The web is the operating system of the future. While not exactly true -- we'll always need a local operating system -- this popular saying really means that the web is the next great platform.
What's a platform? It is the basic structure on which applications stand. In other words, it is what runs our apps. Windows is a platform. The Mac OS is a platform. But a platform doesn't have to be an operating system. Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends such as Office 2.0, we are seeing more and more applications that were once the province of desktop computers being converted into web applications. Word processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small offices.
But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes, from web mashups to Facebook applications to web-based massively multiplayer online role-playing games. With new technologies that help web applications store some information locally -- which allows an online word processor to be used offline as well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.
Cloud Computing and Interoperability
A major barrier to cloud computing is the interoperability of applications. While it is possible to insert an Adobe Acrobat file into a Microsoft Word document, things get a little bit stickier when we talk about web-based applications.
This is where some of the most attractive elements to cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- becomes a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.
Ignoring for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into their spreadsheet, this creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, we'll also need a system to maintain a certain level of security when it comes to this type of data sharing.
Possible? Certainly, but it isn't anything that will happen overnight.
What is Cloud Computing?
This brings us back to the initial question. What is cloud computing? It is the process of taking the services and tasks performed by our computers and bringing them to the web.
What does this mean to us?
With the "cloud" doing most of the work, this frees us up to access the "cloud" however we choose. It could be a super-charged desktop PC designed for high-end gaming, or a "thin client" laptop running the Linux operating system with an 8 gig flash drive instead of a conventional hard drive, or even an iPhone or a Blackberry.
We can also get at the same information and perform the same tasks whether we are at work, at home, or even a friend's house. Not that you would want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.
Organizations looking to optimize across the application lifecycle recognize the need for enhanced innovation and speed to market. Yet most IT resources are focused on covering the basics, leaving fewer resources to support business agility. The solution: Platform as a Service (PaaS).
IBM’s PaaS solution, IBM SmartCloud Application Services, or SCAS, allows clients to differentiate themselves with built-in flexible services that allow them to build and customize cloud solutions their way – leading to a competitive advantage. Companies are using enterprise-class IBM Application Services to measure and respond to market demands, capture new markets, and reduce application delivery and management costs.
First, with IBM Collaborative Lifecycle Management Service, included within SCAS, development teams can establish shared team development environments in minutes – something that used to take weeks. Within hours they can define their development team and begin working collaboratively to respond to business needs.
Another significant benefit of a PaaS approach is the time it takes to get an application deployed and to market. Application deployment can take weeks on a traditional environment but with IBM SmartCloud Application Services, applications can be deployed to the cloud in minutes.
SCAS also allows clients to respond rapidly to changing market conditions by deploying or modifying cloud-centric (“born on the cloud”) or cloud-enabled (legacy) applications quickly and easily. In fact, developers can move from the dev/test environment directly into production with SCAS, taking advantage of proven repeatable patterns contained within the SmartCloud Application Workload Service. These repeatable patterns allow clients to eradicate errors by avoiding manual processes – this drives consistent results, increases productivity, and reduces risk.
IBM SmartCloud Application Services are compatible with the newly announced IBM PureSystems family. For example, through SmartCloud Application Services clients can rapidly design, develop, and test their dynamic applications on IBM's public cloud and deploy those same application patterns on a private cloud built with PureApplication Systems, or vice versa.
IBM SmartCloud Application Services is now in pilot and accepting new clients who want to get ready to accelerate their cloud initiatives. Clients won’t pay for SCAS services during the pilot, but will only be charged for the underlying *SmartCloud Enterprise infrastructure used by the services (that’s because SCAS runs on top of IBM’s Infrastructure as a Service offering, SmartCloud Enterprise, or SCE). Existing SCE customers can get up and running on the pilot quickly and start realizing the benefits of PaaS right away.
To be considered for the program, new or existing SCE customers should visit the IBM SmartCloud Application Services web site and click the button on the right titled, “Get a jump on the competition with the SmartCloud Application Services pilot program.”
You can learn more about IBM SmartCloud Application Services with this video, “The multifaceted potential of platform as a service (PaaS) from IBM.”
CLD Partners, a leading provider of IT consulting services with a particular focus on cloud computing, began using SCAS during the beta which launched in 2011 and has now transitioned into the pilot program.
“We share IBM’s vision for how enterprise customers can achieve huge productivity gains by embracing cloud technologies. SCAS allowed us to utilize world class software in a managed environment that greatly reduced the complexity of the deployment while also providing for future scalability that our customers only pay for when they need it,” said Steve Clune, Founder and CEO of CLD Partners. “Ultimately, traditional infrastructure planning and configuration that would have required weeks was literally reduced to hours. And future flexibility as infrastructure needs change is virtually limitless.”
IT Operations, Independent Software Vendors (ISVs), Line of Business, and Application Developers would benefit from the SCAS pilot program. And it doesn’t matter the company size, enterprise or mid-market; all types of businesses can realize value from getting their applications to market faster.
To learn more about the IBM SmartCloud Application Services pilot program, read the Pilot Services Bulletin or visit the Application Services web site.
One of the exciting and valuable characteristics of IBM SmartCloud Enterprise is its tight linkage with the IBM Software Group portfolio of offerings. In addition to the offerings from IBM Software Group, innovative software vendors are making exciting offerings available as well. There is an ever-growing list of offerings available to IBM SmartCloud Enterprise customers. These recent additions are now in the SmartCloud Enterprise public catalog and available for you to use.
BYOL - Bring Your Own License; PAYG - Pay As You Go
The following BPM images are now available in the catalog:
IBM Process Center Advanced 7.5.1 64b - BYOL
IBM WebSphere Service Registry and Repository (WSRR) is a system for storing, accessing and managing information, commonly referred to as service metadata, used in the selection, invocation, management, governance and reuse of services in a successful Service Oriented Architecture (SOA). In other words, it is where you store information about services in your systems, or in other organizations' systems, that you already use, plan to use, or want to be aware of.
The following WSRR images are now available in the catalog:
IBM WebSphere Service Registry 64bit BYOL
IBM WebSphere Message Broker (WMB) delivers an advanced Enterprise Service Bus (ESB) that provides connectivity and universal data transformation for both standard and non-standards-based applications and services to power your SOA.
The following WMB images are now available in the catalog:
IBM WebSphere Message Broker 188.8.131.52 64b BYOL
IBM SPSS Decision Management enables business users to automatically deliver high-volume, optimized decisions at the point of impact to achieve superior results.
The following SPSS image is now available in the catalog:
IBM SPSS Decision Management 6.2 64b BYOL
From our partner Riverbed comes Riverbed® Stingray™, a software-based application delivery controller (ADC) designed to deliver faster and more reliable access to public web sites and private applications.
The following Riverbed Stingray images are now available in the catalog:
Riverbed Stingray V 8.0 RHEL 6 32 bit BYOL
Additionally, Alphinat SmartGuide provides visual, drag and drop tools that can help you quickly build interactive web dialogues that guide people to the relevant response, help them diagnose problems or lead them through a series of well-defined steps that make it easy to complete complex—or infrequently performed—tasks.
The following Alphinat SmartGuide images are now available in the catalog:
Alphinat SmartGuide 5.1.3 SLES 11 SP1 32-bit PAYG
GridRobotics' Cloud Lab Grid Automation Server can manage any number of client or agent computers, which can be spun up automatically on public clouds like IBM SCE or on private clouds. GridRobotics' Cloud Lab Classroom is a virtual classroom management solution.
The following GridRobotics Cloud Lab images are now available in the catalog:
GridRobotics Cloud Lab Grid Automation Base Server 1.4 32b R2 - BYOL
We keep a list of our partners on our Cloud ecosystem partner images page.
We are committed to adding value continuously to IBM SmartCloud Enterprise to help you advance cloud in your organization.
Securing the Virtual Infrastructure
Cloud computing tests the limits of security operations and infrastructure from various perspectives. Let us examine what is different about cloud security, identifying the existing threats as well as the new areas we should be concerned about.
Figure 2 Cloud Security - Existing & New Threats
I think what makes cloud security complex is the number of layers involved in the cloud service stack and the number of components in each layer. This means:
· Increased infrastructure layers to manage and protect
· Multiple operating systems and applications per server
More Components = More Exposure
As we can see, for traditional infrastructure we already do perimeter protection at the network and operating system levels, as well as physical and personnel security. All of these hold good for cloud as well, combating the existing threats at these layers.
Let us examine the new points of exposure with cloud. Security and resiliency complexities are raised by virtualization and automation, which are essential to cloud. The new risks include:
· Cloud Service Management Vulnerabilities
· Secure storage of VMs and the management data
· Managing identities on the increasing number of virtual assets
· Stealth rootkits in hardware now possible
· Virtual NICs & Virtual Hardware are targets
· Virtual sprawl, VM stealing
· Dynamic relocation of VMs
· Elimination of physical boundaries between systems
· Manually tracking software and configurations of VMs
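The last of these risks points to a broader theme: at cloud scale, tracking the software and configuration of VMs by hand is itself a vulnerability. Here is a minimal sketch of what automated drift detection looks like; the baseline and inventory structures are hypothetical illustrations, not any IBM tooling.

```python
# Sketch: detecting configuration drift across a fleet of VMs.
# The baseline/inventory data shapes are hypothetical examples.

def find_drifted_vms(baseline, inventory):
    """Return names of VMs whose recorded config differs from the baseline for their role."""
    drifted = []
    for name, config in inventory.items():
        expected = baseline.get(config.get("role"), {})
        # Any expected key that is missing or differs counts as drift.
        if any(config.get(key) != value for key, value in expected.items()):
            drifted.append(name)
    return sorted(drifted)

baseline = {"web": {"os_patch": "2012-03", "agent": "1.4"}}
inventory = {
    "vm-01": {"role": "web", "os_patch": "2012-03", "agent": "1.4"},
    "vm-02": {"role": "web", "os_patch": "2011-11", "agent": "1.4"},  # stale patch level
}
print(find_drifted_vms(baseline, inventory))  # ['vm-02']
```

In practice a check like this would run continuously against the cloud's configuration database, flagging drifted VMs for remediation instead of relying on manual tracking.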
To manage these additional complexities, you need a reference model that is comprehensive and covers security controls that can combat not only the existing challenges but also the new challenges that cloud brings in.
The IBM Foundational Security Controls for the IBM Cloud Reference Model (see below) provide the different elements and controls required to build a secure cloud.
Figure 1 Foundation Security Controls for IBM Cloud Reference Model
Managing datacenter identities (identity and access management) is one of the topmost security concerns, and we discussed how to handle it in my previous post. I’ll discuss how to handle the virtualization-related threats in my next post.
Meanwhile, let me know your comments on this reference model. Do you think this set of controls is comprehensive? Do you see any areas not covered from a cloud security perspective? If so, just add it as a comment to this post and let us discuss.
Tracy of IBM Systems Events
Rethink IT. Reinvent Business.
Join us for the 2012 IBM SmartCloud Symposium on 16-19 April 2012 in San Francisco, California. This Symposium will help you Rethink IT and Reinvent Business.
This event will introduce Cloud Computing’s disruptive potential to not only reduce cost and complexity but reinvent the way we do business. Over the course of four days, there will be sessions that define cloud computing and discuss transformative benefits and challenges to consider while sharing specific, proven patterns of success. We will provide proven methods to get started on the Cloud journey from the up-front investments to capacity planning. This event will cover the technology behind private and public clouds whether you choose to build your own, leverage prepackaged solutions or have it delivered as a service.
Sessions will explore challenges and solutions for the security, virtualization and performance of mission-critical applications, as well as for automating service delivery processes in cloud environments. We will help you design, deploy and consume.
Use promotion code A2N for 10% off enrollment!
Managing Datacenter Identities for Cloud
Among the top challenges for cloud, I discussed security as the number one concern, and I detailed the top concerns with regard to securing the cloud in the subsequent post. Cloud computing tests the limits of security operations and infrastructure across the various security and privacy domains.
Cloud brings in a lot of additional considerations like multi-tenancy, data separation and virtualization. In a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases, greatly affecting all aspects of IT security. We will discuss the different security aspects, classifying them against specific adoption patterns (see post here). The cloud-enabled data center pattern is the predominant one, with infrastructure and identity management as its top concerns. Within cloud security, getting the infrastructure security design right is an important aspect; we discussed the details, and how it is done by different public clouds, in the previous post. Now, with regard to identity, let’s discuss the top requirements and use cases, and look at what solutions we can provide to make the cloud secure. Let’s start with managing datacenter identities, which is the top concern.
Managing Datacenter Identities
Identity and access control needs to deliver capability that provides role-based access to securely connect users to the cloud. The users include cloud service provider as well as consumer roles, and within each user group we need to support both user and administrator roles. Identity and access management should deliver the 4As: Authentication, Authorization, Auditing and Assurance.
§ For a cloud consumer user, it is about making sure the user identity is verified and authenticated at the self service portal and providing right access to the resource pools.
§ For the administrator, we need to provide role based access to Service Lifecycle Management functions
§ We will need to integrate with existing User Directory infrastructure (AD/LDAP/NIS) to extend the user identity to the cloud environment as well.
§ Once in the cloud environment, we need to automatically manage access to the cloud resources, through provisioning and de-provisioning of resource profiles and users in the cloud identity and access management systems. Manual processes to manage accounts for users on various virtual systems and applications are not going to scale in a cloud environment. The same is true of the manual processing of various audit logs to meet compliance and audit requirements.
§ Massively parallel cloud-computing infrastructures involve enormous pools of external users as well. We need to ensure a smooth user experience so that users don’t need to enter their credentials multiple times to access various applications hosted within the enterprise or by business partners and cloud providers.
§ Management of user identities and access rights across hosted, private and hybrid clouds for internal Enterprise users is also a major challenge that includes
o Centralized user access management to on and off-premise applications and services
o Federated single sign-on and identity mediation across different service providers
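The requirements above can be made concrete with a small sketch of role-based access control plus an audit trail, covering three of the 4As (authentication is reduced to a directory lookup here). The roles, permissions and directory are hypothetical examples, not the interface of any IBM product.

```python
# Sketch: role-based access control with an audit trail.
# Roles, permissions and the user directory are illustrative assumptions.

ROLE_PERMISSIONS = {
    "consumer": {"provision_vm", "view_catalog"},
    "admin": {"provision_vm", "view_catalog", "manage_lifecycle"},
}

audit_log = []  # auditing: every access decision is recorded here

def check_access(user, action, directory):
    """Look the user up in the directory (simplified authentication),
    authorize the action by role, and record the decision for audit."""
    role = directory.get(user)
    allowed = role is not None and action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append((user, action, allowed))
    return allowed

directory = {"alice": "admin", "bob": "consumer"}  # stand-in for AD/LDAP/NIS
print(check_access("bob", "manage_lifecycle", directory))    # False: consumers cannot manage lifecycle
print(check_access("alice", "manage_lifecycle", directory))  # True: admin role
```

A real deployment would replace the dictionary lookup with the existing AD/LDAP/NIS directory and have provisioning automation create and remove these entries, per the requirements above.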
Let’s look at some of the capabilities that we can leverage to address these requirements.
IBM Security Identity and Access Assurance provides the following capabilities, which enable clients to reduce costs, improve user productivity, strengthen access control, and support compliance initiatives.
Sreek Iyer
Infrastructure Security Design (Public Clouds)
As we discussed in my previous post, transparency, or more control, is the need of the hour with regard to security on the cloud. Let us examine how this is done by the popular cloud providers and understand the methods and the technologies. We need to secure the infrastructure, network, endpoints, applications, processes, data and information, and overall have governance to mitigate risk and meet compliance. Let us take the infrastructure to begin with.
The key areas for a security team to design for with regard to infrastructure security are:
Let us start looking at the public cloud implementations to understand how they are managing these aspects.
Almost all the vendors – IBM, Amazon, Microsoft, Salesforce – provide a means to SSH into the guest OS using keys, with the session authenticated by a public/private key pair that can be generated by the customer.
IBM LotusLive employs a security approach based on three pillars that includes ensuring a security-rich infrastructure.
We will see how the infrastructure security aspects are dealt with for private clouds in my next post. Stay tuned and keep those comments coming. Some of my readers have told me that the blog entries are not showing up properly in Internet Explorer. While I will make the effort to fix the issue, please use Firefox or another browser in the meantime.
And if you find these posts interesting, don’t forget to rate the post (click on the stars), and if you have an extra minute, do put in a comment on what aspects you find interesting or would like to discuss.
Sreek Iyer
Securing the Cloud – What are the top concerns?
IT security is a well-researched and mature area. The reason we have enterprises doing commerce over the web today is that IT security practices, tools and technologies have matured enough to establish trust and overcome the concerns. As with most new technology paradigms, security concerns surrounding cloud computing have become the most widely talked-about inhibitor of widespread usage, as discussed in my previous post.
To gain the trust of organizations, cloud services must deliver security and privacy that meet or exceed what is available in traditional IT environments. Let us discuss the top security concerns when it comes to cloud.
Transparency or Less Control
If we look at the security and privacy domains in cloud, they are no different from the traditional domains. We need to secure the infrastructure, network, endpoints, applications, processes, data and information, and overall have governance to mitigate risk and meet compliance. But in a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases, greatly affecting all these aspects of IT security. The different cloud deployment models, such as public, private and hybrid clouds, also change the way we need to think about security. The responsibilities are spread across consumers, service resellers and providers. The immediate risk of this shared responsibility is that nobody gets a holistic view of security, and so there is less customization of security controls. Consumers need visibility into day-to-day operations as well as access to logs and policies. This lack of visibility, or transparency, is usually the topmost concern shared universally.
Data and Information Security
The next primary concern that customers mention related to cloud security is data and information security. The specific concerns include:
§ Protection of intellectual property and data
§ Ability to enforce regulatory or contractual obligations
§ Unauthorized use of data
§ Confidentiality of data
§ Availability of data
§ Integrity of data
A shared, multi-tenant infrastructure increases the potential for unauthorized exposure, especially in the case of public-facing clouds. Security administrators need to worry about designing security for applications and data that are publicly exposed and can potentially be accessed by anybody on the internet.
Different industries and geographies have different regulations and rules they need to comply with, depending on the workloads and data they put on the cloud. Complying with SOX, HIPAA and other regulations is one issue because of which customers are not ready to put their applications on the cloud. Cloud or no cloud, for these sorts of workloads comprehensive auditing capabilities are essential.
Security Management - Methods and Tools
Finally, customers need to know how today’s enterprise security controls are represented in the cloud. They need to understand how security events are monitored and correlated, and what actions are taken when needed to keep their infrastructure, workloads and data safe. Security getting in the way of high availability is another key concern: IT departments worry about a loss of service should outages occur for security reasons. When running mission-critical applications, how soon you can get the environment back at the same level of security is the priority.
Until all of these concerns are addressed and without strong availability guarantees, customers may not be ready to run their apps in the cloud. But things are not that bad as we might think. We will discuss how these aspects can be addressed and what tools and technologies to put to use in the subsequent posts.
Meanwhile, I recommend that you read this very interesting whitepaper, “Cloud Security: Who do you trust?”, which discusses all of these aspects in detail, as well as the different security challenges that cloud introduces.
Cloud Security – The Topmost Concern and Opportunity
First of all, wishing all my readers a very happy and prosperous year 2012 ahead.
A few things happened towards the end of the year that were significant to me. IBM acquired Q1 Labs to drive greater security intelligence and created a new Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I’m happy to be in this new role, where I can provide solutions to customers to handle their cloud security concerns and make it easy for them to adopt cloud and innovate at a faster rate than before.
In my previous post, we discussed security as the topmost concern keeping customers and enterprises from adopting cloud. As part of this year’s posts, I plan to discuss the various security issues and aspects of cloud computing.
We will explore the unique challenges of cloud security and discuss which aspects are important for each customer adoption pattern that we have seen.
We will also learn how the IBM Security Framework can be used to address the various security challenges namely
· Security governance, risk management and compliance
· People and Identity
· Data and information
· Application and process
· Network, server and endpoint
· Physical infrastructure
Looking forward to your comments and inputs on this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the world’s most comprehensive security portfolio – IBM Security Systems. I’ll try to elaborate the IBM point of view on cloud security and discuss the architectural model to address the security requirements for cloud. Stay tuned and keep those comments and inputs coming.
cynthyap
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple: you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations that are still trying to realize the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
· Rapidly scalable deployment designed to meet business growth
· Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
· Reduced complexity through ease of use, and improved time to value
· Reduced IT labor resources with self-service requesting and highly automated operations
· Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
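To illustrate the last point, here is a minimal sketch of the bookkeeping behind image versioning and sprawl control. The data model is a hypothetical illustration, not the actual IBM SmartCloud Provisioning interface.

```python
# Sketch: an image library with versioning, the kind of bookkeeping that
# keeps image sprawl under control. The data model is illustrative only.

class ImageLibrary:
    def __init__(self):
        self._versions = {}  # image name -> list of versions, oldest first

    def publish(self, name, version):
        """Record a new version of an image."""
        self._versions.setdefault(name, []).append(version)

    def latest(self, name):
        """Return the most recently published version of an image."""
        return self._versions[name][-1]

    def sprawl_report(self, keep=2):
        """Versions beyond the newest `keep` are candidates for retirement."""
        return {name: versions[:-keep]
                for name, versions in self._versions.items()
                if len(versions) > keep}

lib = ImageLibrary()
for version in ["1.0", "1.1", "2.0"]:
    lib.publish("rhel-web", version)
print(lib.latest("rhel-web"))  # 2.0
print(lib.sprawl_report())     # {'rhel-web': ['1.0']}
```

A federated image library adds analytics and cross-site replication on top of this kind of version tracking, but the core idea is the same: know what images exist, and retire the ones nobody should be launching.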
Using this technology, we’ve seen customers get a cloud up and running in just hours, realizing immediate time to value. It’s fast: administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load an OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
Sreek Iyer
Possible Solution for the Mullaperiyar Dam Issue?
While I’m writing this blog, the ministers of Tamil Nadu and Kerala are meeting with the Prime Minister to discuss the contentious Mullaperiyar issue at length. For those who don’t know, this is about the Mullaperiyar Dam in south India. Mullaperiyar Dam is a masonry gravity dam on the Periyar River, operated by the Government of Tamil Nadu under a 999-year lease agreement. The catchment areas and river basin downstream of the Periyar include five districts of central Kerala, namely Idukki, Kottayam, Ernakulam, Alappuzha and Thrissur, with a total population of around 3.5 million.
This dam is at centre stage again in the wake of reports that it is weakening due to increased incidents of tremors in Idukki district in Kerala. Ministers from Kerala are seeking Central Government intervention to ensure the safety of the dam. At the same time, Tamil Nadu is insisting on increasing the water level in the reservoir to enhance water supply to the state. While Tamil Nadu wants to raise the water level, Kerala has been insisting that it be reduced from the current 136 feet to 120 feet.
Currently, I don’t think we have clear metrics on the exact usage of water by each state, the right level of water to be retained by the dam, the risks involved, and so on. We have been relying on data from the past.
However you look at it -- whether too much or not enough -- the world needs a smarter way to think about water. We need to look at the subject holistically, with all the other considerations as well. We use water for more than drinking. We need to make an inventory of how much water we get and how it is used – for industry, irrigation, etc. This is where I think we need smarter ways to manage water that adequately address both states’ requirements.
IBM Smarter Water Management can help us think in a smarter way about water. For instance, IBM is helping the Beacon Institute build a source-to-sea real-time monitoring network for New York’s Hudson and St. Lawrence Rivers that reports on conditions and threats in real time. There are many other case studies across the globe on IBM Smarter Water Management.
Those interested in the problem and the possible solutions should definitely read IBM’s broader outlook on water management as covered in the Global Innovation Outlook.
Rivers for Tomorrow is another interesting partnership between IBM and The Nature Conservancy. IBM is providing a state-of-the-art support system for a free, online application that will provide easy access to data and computer models to help watershed managers assess how land use affects water quality.
Though water is a worldwide entity, it is treated as a regional issue. I think we should try putting technology to use to solve our water problems. The solution should be a more instrumented, interconnected and intelligent system that can take into consideration not only the real-time monitoring of the river but also early warning systems to notify of risks such as earthquakes. IBM’s Strategic Water Management Solutions include offerings to help governments, water utilities, and companies monitor and manage water more effectively. The IBM Strategic Water Information Management (SWIM) solutions platform is both an information architecture and an intelligent infrastructure that enables continuous automated sensing, monitoring, and decision support for water management operations.
And you might be wondering what this has to do with cloud and why this post is on Cloud Computing Central. For these solutions and platforms to be successful, it is highly important that we have energy-efficient, high-performance computing platforms and complex sensor, metering, and actuator networks. Such platform needs, and the flexibility of having the solution on-premise as well as leveraging different delivery models, can only be supported through a cloud.
I think we should just leverage these solutions on the cloud to solve this issue and keep all the states and its people happy :-).
Sreek Iyer
Top 5 Challenges to Cloud Computing
In my previous post, we looked at understanding the different adoption patterns – i.e. how customers are turning towards cloud. Some of the key reasons behind the “why” are listed below.
While all of these are good, there are still many yet to get on this cloud computing train. Let’s explore the key concerns and challenges that make them reluctant to jump in. The following are inputs that I’ve gathered from various analyst studies and resources on the internet.
I plan to discuss more about the perceived and real threats related to security and privacy in my subsequent posts. In my new role as an architect for IBM Security Solutions, I’d like to discuss the details of what IBM tools and technologies you could use to overcome the issues.
Meanwhile, keep those comments coming; I look forward to them to understand what other areas you think are key concerns to be addressed to accelerate cloud adoption.
The IBM Tech Trends Report says that during the next two years, 75% of organizations will engage in cloud computing
RHyman
The IBM Tech Trends report is out! We asked, you answered. Check out the results of IBM developerWorks' 2011 Tech Trends survey and find out what more than 4,000 IT professionals -- your peers -- have to say about the future of technology, including their opinions on cloud computing, business analytics, mobile computing, and social business.
The report provides insight from the worldwide IT development community into the adoption, preferences and challenges of key enterprise technology trends including cloud, business analytics, mobile computing, and social business. The results also provide guidance on areas where IT professionals like you say they need help with skills to develop new technologies and platforms that will be in demand in the coming years.
As we focus in on cloud, there is clearly a growing trend to view it as more than just cheap infrastructure. Companies are now exploring the possibility of developing applications in the cloud (you guys are already doing that), many of them related to mobile development.
Currently the biggest challenge is integrating the cloud into application development, as the reduction of operating expenses is the driver of this move. We still have a way to go, however, with 40% of survey responders saying their company is not yet involved in cloud. Hmm, interesting, right?
The cool news is that 75% of those same responders expect this to change over the next two years, with their enterprises and others taking to building cloud infrastructure.
Understanding the Cloud Adoption Patterns
I discussed The Next Big Thing – Cloud-Enabled Business Model Innovation in my previous post. But you may be asking: where do I start? That’s where the Cloud Adoption Patterns work that IBM has pioneered is going to help. This is some great analysis that IBM has done based on the thousands of cloud engagements completed so far. It is a good abstraction of the ways organizations are consuming cloud -- a good starting or entry point for discussions on cloud.
The four most common entry points to cloud solutions are discussed in the picture above. I love these YouTube videos – Cloud Adoption Patterns – that tell you the essence of these patterns in less than two minutes.
· Cloud-Enabled Data Center – to achieve better return on investment and manage complexity by extending virtualization well beyond just hardware consolidation.
· Cloud Platform Services – to accelerate time-to-market by creating, deploying and managing cloud applications.
· Business Solutions on Cloud – to access enterprise-level capabilities through a provider’s applications running on a cloud infrastructure; to improve innovation and flexibility while minimizing risk and capital expense.
· Cloud Service Provider – to innovate with new business models by building, extending, enabling and marketing cloud services.
For each of these patterns of cloud adoption, we have defined a set of proven projects that it supports with software, services and solutions to help businesses streamline the implementation of their chosen cloud capabilities.
The Cloud-Enabled Data Center pattern covers most private cloud implementations: most customers start with providing infrastructure as a service on the cloud. This pattern also discusses how we can share infrastructure across multiple projects and drive benefits, and how much automation of operations and business processes is possible, giving you a responsive IT department that can help the business be agile.
The next level of gain, or reuse, is to run your workloads on a shared stack of middleware. The Platform as a Service pattern is an integrated stack of middleware that is optimized to execute and manage different workloads, for example batch, business process management and analytics. This middleware stack standardizes and automates a common set of topologies and workloads, providing businesses with elasticity, efficiency and automated workload management. A cloud platform dynamically adjusts workload and infrastructure characteristics to meet business priorities and service level agreements. When the layers below understand what workloads are running on top of them and optimize themselves accordingly, those workloads run more efficiently and at a lower cost. The Cloud Platform Services adoption pattern can improve developer productivity by eliminating the need to work at the image level, so that developers can instead concentrate on application development.
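The idea of a platform dynamically adjusting to meet service level agreements can be sketched as a simple scaling decision. The thresholds and metric names here are illustrative assumptions, not the behavior of any specific product.

```python
# Sketch: an SLA-driven scaling decision of the kind a platform layer might
# make. Thresholds, headroom factor and metric names are assumptions.

def scaling_decision(current_instances, avg_response_ms, sla_ms,
                     min_instances=1, max_instances=10):
    """Scale out when the SLA is breached; scale in when there is ample headroom."""
    if avg_response_ms > sla_ms and current_instances < max_instances:
        return current_instances + 1   # SLA breached: add capacity
    if avg_response_ms < 0.5 * sla_ms and current_instances > min_instances:
        return current_instances - 1   # well under SLA: reclaim capacity
    return current_instances           # within band: hold steady

print(scaling_decision(3, avg_response_ms=450, sla_ms=300))  # 4
print(scaling_decision(3, avg_response_ms=100, sla_ms=300))  # 2
```

A real platform would add damping (cooldown periods, step sizes) and weigh business priorities across workloads, but the core loop is this: measure against the SLA, then adjust capacity.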
The Business Solutions pattern maps to the SaaS model, where you leverage cloud to innovate with speed and efficiency to drive sales and profitability. Here we look at creating and consuming business solutions on the cloud. Some of the key offerings in this space are business process design, social and collaboration tools, supply chain and inventory, digital marketing optimization, B2B integration services, and so on. Consuming these generic services from the cloud relieves you of the pain of setting things up from scratch and enables you to scale based on your demands.
The Cloud Service Provider (CSP) pattern is the one that most telcos adopt when they have to serve multiple consumers with a single cloud solution. We provide tools and technologies to design and deploy a highly secure, multi-tenant cloud services infrastructure that integrates nicely with plenty of third-party applications.
As we understand, the IaaS pattern is the easiest to implement, with more work required for the SaaS or CSP patterns. But the gain is greater when we share at the software or application level. Depending on where you are in your current IT environment, you can pick and implement whichever of these patterns suits you. The work we have done to analyse these patterns and provide a consistent set of technologies and tools to build them out should make life easy for you. Leverage it: less pain, more gain.
There's still time to sign up for the IBM webcast: Managing the Cloud – Best practices for cloud service management
The Next Big thing – Cloud enabled business model Innovation
I remember the day one of our executives, Nick Donofrio, visited us in India. He is something of a chief mentor for all the members of the IBM technical community and has watched IBM and the IT industry for many years. He was addressing a Technical Exchange event a few years ago when someone in the audience asked him this question: “Sir, you have seen technology for so many years now; can you tell us what's going to be the next big thing in terms of invention/innovation?” Everyone was all ears waiting for the answer. Is it the next version of the internet, search, a Web 2.0 application, or maybe an intelligent mobile app? But his answer was that he believes there is not going to be any next big thing in technology. The next big thing for all of us is going to be business model innovation. Even today his statement holds very true. Businesses that are able to reinvent their business model succeed and manage to stay on top, while others vanish from the scene.
There are lots of innovative and technical things happening all around us, such as:
I believe the next big thing is going to be how well you can use all these elements for business seamlessly and cost effectively. The key to succeed is to use technology to do this business model innovation and do it faster.
How do you do it faster? The answer is cloud. I say this based on the data IBM has gathered analyzing the cloud adoption patterns of over 2,000 customers. All of them have seen the following advantages with cloud.
Considering all these factors, I think the next big thing is cloud-enabled business model innovation. I could easily relate to some of the latest announcements we have made in the cloud, because they restate this same belief. As discussed in this interesting video by IBM's Saul Berman (Innovation & Growth Leader), 60% of the customers IBM interviewed say they would consider cloud immediately, and 70% of them intend to use cloud to enable business model innovation. Based on the rate at which they adopt new technologies, they may be an Optimizer (looking at improving the existing model), an Innovator (looking at a new model) or a Disruptor (ready to bring in game-changing ideas).
So as today's IT leaders, let us broaden our focus from merely delivering technology to solving larger business issues. One great opportunity to do that is to tune in to, or be present at, SWG Universe India 2011. You will get a chance to listen to some great speakers talk about how to use cloud for business model innovation.
Cloud-enabled business model innovation, I feel, is the next big thing that could change IT and business. So come, let's Rethink IT & Reinvent Business.
cynthyap 110000GC4C Tags:  provisioning cloud service cloud_computing management virtualization 3,074 Views
Today IBM announced new SmartCloud Foundation capabilities to help organizations realize the potential of cloud computing. Watch the replay of the IBM SmartCloud launch webcast, to learn more about how the new announcements, including IBM SmartCloud Provisioning (delivered by IBM Service Agility Accelerator for Cloud), can help customers move beyond virtualization to more advanced cloud deployments.
To be responsive to your reading interests and learning needs, I thought I'd gather some quick feedback to understand your reactions to my blog. Please respond by taking this short survey; it is deliberately brief.
You can see all the blog entries in this category by clicking on the tag "stepbystep". If you liked any entry in the blog, please rate it by clicking on the "star", or feel free to provide your comments and inputs through this feedback form.
You can access the feedback form here.
Look forward to your comments and inputs.
Sreek Iyer 2000001K7N Tags:  cloud_computing tivoliindia ibmswuin ibmindia cloud stepbystep 3,792 Views
I've been writing about the step-by-step approach to cloud till now. Given the rate at which I see cloud computing being adopted inside and outside the enterprise, I think we really need to get out of our step-by-step approach and start riding the wave. IBM has implemented perhaps over 2,000 cloud engagements in the last year and is managing over 1 million virtual machines today. We have identified customer cloud adoption patterns and entry points to cloud, and have lots of lessons learnt and experience to share. Wouldn't it be nice if we could talk with you about all of this and share the best practices too? All of it is difficult to discuss through a blog, so you have a better option: the IBM Software Universe 2011 – The Next Big Wave.
Yes, the 7th edition of IBM India's largest annual software conclave is happening this year on Oct 19th and 20th. I believe it would be time well spent to learn from our experience and accelerate your adoption of cloud. We have some interesting sessions on the Private Cloud [R]Evolution, which will discuss some of the key trends and technologies for building the cloud inside your firewall. If you are looking to understand how to expand your existing data center capabilities for better visibility, control and automation across your physical and virtual environments, then “Integrated Service Management – Thinking Beyond the Data Center” is a must-attend session. And if you are a business or enterprise IT manager looking to start with the cloud, you don't want to miss the “Get Your Head in the Cloud” session, which can show you how to get some of your collaboration requirements from the cloud.
Finally, it is a wonderful opportunity for you to talk to some of the Distinguished Engineers and IBM Fellows, who can spend 1:1 time with you to listen to your issues and discuss the future roadmap. For instance, Bala Rajaraman, a Distinguished Engineer whose responsibilities include the architecture and design of Cloud & Service Management solutions, is going to be in India, and this is your opportunity to catch up with him.
Last but not least, there will be Solution Expos set up for you, so you have an opportunity to touch and feel the cloud solutions. These should include industry-specific demos and technology/product demos from IBM as well as partners.
So be there on Oct 19th and 20th at the IBM Software Universe 2011. It is going to teach you a new skill: how to ride the next big wave... the cloud wave.
cynthyap 110000GC4C Tags:  management virtualization service cloud managing monitoring cloud-computing 3,486 Views
Join us for the Managing the Cloud Webcast series to learn more about best practices, technical approaches and capabilities to help solve your business and technical challenges in the cloud. Sign up for these free 1 hour webcasts today.
Best practices for cloud service management - Nov 8, 12-1EST
Organizations today are looking to cloud computing to deliver cost savings and faster service delivery. However, most organizations are still struggling to put in place the basic IT infrastructure necessary to take the leap to a robust cloud. This session will explain how service management can provide the essentials to maintain service levels in the cloud, along with best practices based on IBM's work with customers. This information will provide the foundation for building and managing a cloud that meets your business objectives and transforms IT. https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=swg-tivoli-nov8managingcloud
Performance management in the cloud - Nov 15, 12-1EST
Cloud services can leverage everything from databases to mainframe transactions to SOA services, so the ability to see how all these different touch points are performing is critical. See how integrated service management can provide the capabilities you need to monitor and manage today's cloud-based services and help you meet your service level goals. https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=swg-tivoli-nov15managingcloud
Chapter 19 – Tivoli Process Automation Engine
As we discussed in the previous post, it is important that all the processes work together to bring successful automation to the cloud management platform. A process workflow automation engine is what makes this possible. In this chapter we will discuss the Tivoli process automation engine, which forms the base for IBM process automation in the cloud space.
The Tivoli process automation engine provides the user interface, configuration services, workflows and common data system needed for IBM Service Management products and other services. As we already know, IBM Service Management (ISM) is a comprehensive and integrated approach to service management, integrating technology, information, processes and people to deliver service excellence and operational efficiency and effectiveness for traditional enterprises, service providers and mid-size companies. The Tivoli process automation engine, previously known as Tivoli base services, provides the base infrastructure for applications like Tivoli Maximo Asset Management, the Change and Configuration Management Database (CCMDB), Tivoli Service Request Manager (SRM), Tivoli Asset Management for IT (TAMIT), Tivoli Provisioning Manager and Tivoli Service Automation Manager. Any product that has the Tivoli process automation engine as its foundation can be installed alongside any other product that has the Tivoli process automation engine.
IBM Service Management (ISM) comprises:
By having a common process automation engine, we can successfully link operational and business services with infrastructure through a single (J2EE) platform. We can also leverage current investments by linking this engine with existing process automation technologies and products. By building a unified platform to automate processes, we have taken data integration to the next level, where sharing data between applications has never been easier. This integrated process automation platform can support repeatable IT functions like incident management, problem management, change management and configuration management, all the way through to release management. All of these processes tie into the CMDB, where they share consistent data via bidirectional integration. The platform supports best practices such as ITIL and other industry best practices, facilitating an automated approach across the IT management lifecycle. It also forms the basis for automating repetitive tasks that can be handled by the system instead of requiring costly human intervention. Through its adapters, TPAE provides data federation from the multiple sources you already have, translating the information into usable data that can be leveraged by internal processes and workflows.
Figure 1 Tivoli process automation integrated portfolio
The Tivoli Process Automation Engine Wiki provides details on each of the components and capabilities that make up this integrated portfolio.
The Certification Study Guide Series: Foundations of Tivoli Process Automation Engine is an IBM® Redbooks publication that can guide you towards an IBM Professional Certification on the Tivoli Process Automation Engine.
Brocade and Avnet Technology Solutions Bring Simplified Server and Desktop Virtualization Solutions to the Channel Through the Brocade CloudPlex Architecture
JeffHebert 060001UEQ2 Tags:  enterprise paas cloud switching saas emerging network iaas storage technology 3,148 Views
JeffHebert 060001UEQ2 Tags:  switching enterprise saas cloud networking paas iaas storage 3,304 Views
In a cloud service provider environment, there are various business process and compliance requirements that need to be addressed before the environment can go live. The following areas need to be designed:
IBM's strategy differs from other vendors' in that it focuses on bridging business and IT processes using a common software framework with common services, including process automation and security services. IBM Service Management is built on the Tivoli Service Management Platform and wrapped with best practices, methodologies and services to help you deliver services to your customers effectively and efficiently.
We provide an integrated solution that covers the full management of data, processes, tooling and people. The key differentiator is a common data model that all the core solutions share, making data sharing simple. It is important that all the processes work together, and a process workflow automation engine is what makes this possible. We will discuss this common workflow process automation engine in the next post.
FleetCor Selects Brocade to Provide Cloud-Optimized Network Services for 500,000 Commercial Accounts
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 -- Brocade (NASDAQ: BRCD) today announced that FleetCor, a leading independent global provider of specialized payment products and services to businesses, commercial fleets, major oil companies, petroleum marketers and government fleets, has selected Brocade as the vendor to build its cloud-optimized network. This new network enhances FleetCor's ability to securely process millions of transactions monthly and ultimately better serve its commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor cardholders worldwide, and they are used to purchase billions of gallons of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help evolve its data center and IT operations into a more agile private cloud infrastructure. Brocade® cloud-optimized networks are designed to reduce network complexity while increasing performance and reliability. Brocade solutions for private cloud networking are purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we looked at market leadership and non-stop access to critical data," said Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade cloud-optimized networking solutions are perfect for our data centers because they allow us to optimize applications faster, virtually eliminate downtime and help us meet service level agreements for our customers. Moving to a cloud-based model also provides us the flexibility to make adjustments on the fly and access secure information virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router in each of its three data centers, citing scalability as a major driver for the purchase. This approach enables FleetCor to virtualize its geographically distributed data centers and leverage the equipment it already has to achieve maximum return on investment. The Brocade MLXe provides additional benefits for FleetCor by using less power and having a smaller footprint than competitive routers, which is critical in power- and space-constrained locations that need to allow for growth. The Brocade MLXe also enables continuous business operation for FleetCor through Multi-Chassis Trunking, massive scalability supporting the industry's highest 100 GbE density with no performance degradation for advanced features like IPv6, and flexible chassis options to meet network and business requirements.
The Brocade ServerIron ADX Series of high-performance application delivery switches provides FleetCor with a broad range of application optimization functions to help ensure the reliable delivery of critical applications. Purpose-built for large-scale, low-latency environments, these switches accelerate application performance, load-balance high volumes of data and improve application availability while making the most efficient use of the company's existing infrastructure. The series also delivers dynamic application provisioning and de-provisioning for FleetCor's highly virtualized data center, and enables seamless migration and translation to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers, FleetCor has eliminated thousands of costly networking cables, saving hundreds of thousands of dollars and allowing the company to segment, streamline and secure its network. FleetCor has also been able to easily integrate Brocade network technology with third-party offerings already installed in the network, for complete investment protection. FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in each of our data centers to help us leverage the benefits of cloud computing and the Brocade MLXe delivered on all fronts," said Keirbeck. "By virtualizing our data center, Brocade allows for non-stop access to the mission-critical data that FleetCor and its customers rely on every day. We chose the Brocade MLXe because of the tremendous results we already saw from our existing Brocade solutions and the exceptional support and service."
According to a report from analyst firm Gartner, "Although 'economic affordability' is an immediate, attractive benefit, the biggest advantages (of cloud services) result from characteristics such as built-in elasticity and scalability, reduced barriers to entry, flexibility in service provisioning and agility in contracting."(1)
Social Media Tags: Brocade, LAN, Local Area Network, ADX, ServerIron, MLX, MLXe, reliability, scalability, security
(1)Gartner " Cloud-Computing Service Trends: Business Value Opportunities and Management Challenges, Part 1" February 23, 2010
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.
AndyGroth 060000C0EQ Tags:  smartcloud paas vmware kvm cloud ibm_workload_deployer red_hat 6,338 Views
Note: This is a (slightly updated) re-post from a personal blog - just my view in the context of IBM's drive to foster open choice and collaboration.
Please bear in mind that this is based on my personal thoughts (not an official IBM position) and read the article as it is intended to be - thought provoking ... enjoy!
Having returned from the European Red Hat Partner summit and the VMware vForum where I presented on behalf of IBM, it took me a while to digest the “openness” of it all …so let me share my thoughts retrospectively.
Being proprietary rocks…! (?)
Public cloud can only exist on open source … ?
There was a bold statement by a speaker at the Red Hat summit: “Public cloud can only live on open source!”
In the meantime, keep your eyes peeled and expect the industry to increase its focus on enabling hybrid connectors; I obviously can't make any specific forward-looking statements from an IBM perspective. But take Red Hat as an example: it made clear that CloudForms (their IaaS platform) can indeed manage VMware through their DeltaCloud driver, and – while currently positioning CloudForms for private and hybrid clouds – their vision (of course) is for DeltaCloud to be the top-level public layer linking into private (or public) VMware clouds.
Red Hat recently announced their hosted “OpenShift” PaaS platform, which essentially allows developing and running Java, Ruby, PHP and Python applications and comes in 3 different editions: 1) “Express” (free), which provides a runtime environment for simple Ruby, PHP and Python apps; 2) “Flex”, for multi-tiered Java and PHP apps with more options (like MySQL DBs and JBoss middleware); and 3) the “Power” edition for full control, supporting “any application or programming language that can compile on RHEL 4, 5, or 6” and enabling you to deploy apps directly on EC2 and (in the near future) to IBM's SmartCloud.
VMware had earlier announced their own open “Cloud Foundry” PaaS project, which has incarnations as a fully hosted service (currently in beta), as an open source project (CloudFoundry.org) and as a free single PaaS instance for local development use.
So what's IBM doing in this space? IBM has recently announced the IBM Workload Deployer - an evolution of the WebSphere CloudBurst hardware appliance. It essentially stores and secures "WebSphere Application Server Hypervisor Edition images" and more importantly workload patterns which can be published into a cloud. These workload patterns (think of them as customizable templates that capture settings, dependencies and configuration required to deploy applications) enable you to focus on what essentially differentiates PaaS from IaaS ... the application rather than the infrastructure. Dustin Amrhein explains this much better than me in this little blog.
So, yes, I honestly believe that KVM has a good chance of becoming the hypervisor of choice for public cloud. However, that is unlikely to be the control point. So which management platform(s) will take that all-important crown? Will it be an OSS-based one? I don't want to hazard a guess; there are many, and that is part of the problem. Many argue that the open source “communities” will have to overcome a challenge and become a COMMUNITY if they want to succeed. ESX could not be beaten with 7 or 8 different (but weak) flavours of Xen, and that was just a single OSS project splintered by commercial offerings. In the same way, the sea of OSS-based cloud controllers (Eucalyptus, OpenStack, CloudStack, DeltaCloud, OpenNebula) faces focussed (more proprietary) heavyweights like Microsoft, Google and Amazon.
The increasing number of OSS management solutions and “open bodies” will also make e.g. VMware less nervous than intended as long as they indirectly compete with each other …
BUT (and it's a big “but”) I would argue that anyone not strategically looking at these open solutions is at best ignorant or – e.g. if you are a service provider yourself – more likely long-term professionally suicidal. Yes, in an ideal world everyone wants today's best of breed, but more critically you have to maintain your negotiating position through the ability to switch, and if only for that reason you need to keep your options open!
It will be of the utmost importance to partner with solution providers who share this mind-set and have the capability and strategy to support such a long-term goal; and yes, IBM is uniquely positioned to fulfill this role.
And while I spoke to many completely different clients at both events, that was a common concern raised by most of them.
Industry endorsement like the recent OVA announcement - with IBM being a major driving force and supporter - will help to give KVM the needed credibility and weight … I am looking forward to seeing these visions translated into tangible solutions.
- Test Drive the IBM SmartCloud with this simulator...
- CloudForms (IaaS) is in beta with availability planned for fall 2011
JeffHebert 060001UEQ2 Tags:  storage scalable secure paas emerging technolgy iaas networking servers cloud available saas reliable 4,307 Views
Great video. A great many folks have already started making the journey into the cloud without being fully aware of it. Consider that most large enterprise data centers are consolidating and virtualizing servers, storage and networking today; once all three of those areas are consolidated and virtualized, you are transforming business processes and will eventually reach a point where infrastructure/information on demand is the next logical step.
Cloud Service Provider Platform (CSP2)
Through the earlier posts we have seen the essentials of creating a cloud environment, which consists of the management platform as well as the managed environment. We have seen the critical roles and organizations involved, as well as the importance of Cloud Service Strategy and Cloud Service Design. We also saw the criticality of a Cloud Computing Reference Architecture (CCRA) to tie all the solution elements together, and how IBM Service Delivery Manager (ISDM), an enterprise cloud solution based on Tivoli Service Automation Manager (TSAM), can be deployed as a set of virtual images that automate IT service deployment and provide resource monitoring, cost management, and provisioning of services in the cloud.
The IBM Cloud Service Provider Platform is specifically tailored to the needs of CSPs and is designed to help them successfully:
Figure 1 IBM Integrated Service Management Solution for Cloud Service Providers
The IBM Cloud Service Provider Platform is an integrated service management offering for cloud service providers, built around a core Service Automation and Management component provided by ISDM. Beyond the core, it makes available four extensions: network management, security management, storage management, and advanced monitoring and service level management, enabling a comprehensive management offering.
Communications service providers (CSPs) around the world are looking for smarter ways of doing business. They are being challenged to transform the way services are created, managed, and delivered. CSP2 neatly integrates and extends the SPDE (Service Provider Delivery Environment) for communications service providers to build the ecosystem needed to become a cloud service provider. For a cloud-based business strategy, check out the video from Scott on the value of CSP2 for CSPs.
In this article learn how to:
Set up a 64-bit Linux instance (a Bronze-level offering) with the Linux Logical Volume Manager (LVM).
Capture a private image and provision it as a new Platinum instance.
Grow the LVM volume and file system to accommodate the new physical volumes.
Configure LVM across physical volumes using Linux LVM-type partitions.
Background on LVM and the test scenario
First, a description of LVM concepts and the test scenario for those who may not be familiar with LVM.
Note: You are about to configure Linux LVM: Here be Dragons. Mind the gap.
The Linux LVM is organized into physical volumes (PVs), volume groups (VGs), and logical volumes (LVs).
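For readers new to LVM, the PV/VG/LV hierarchy can be pictured with a toy model: PVs pool their capacity into a VG, and LVs are carved out of the VG and can grow whenever the VG has free space, for example after a new PV is added. This is a conceptual sketch only (sizes in GB, names hypothetical), not real LVM tooling:

```python
# Toy model of the LVM hierarchy (sizes in GB; not real LVM tooling).
# Physical volumes (PVs) pool their capacity into a volume group (VG);
# logical volumes (LVs) are carved out of the VG and can grow as long as
# the VG has free space, e.g. after a new PV is added to the group.

class VolumeGroup:
    def __init__(self, name, pv_sizes):
        self.name = name
        self.capacity = sum(pv_sizes)   # VG capacity = sum of its PVs
        self.lvs = {}                   # LV name -> size

    @property
    def free(self):
        return self.capacity - sum(self.lvs.values())

    def create_lv(self, name, size):    # like lvcreate
        assert size <= self.free, "not enough free space in the VG"
        self.lvs[name] = size

    def extend_vg(self, pv_size):       # like vgextend: add a PV to the VG
        self.capacity += pv_size

    def extend_lv(self, name, extra):   # like lvextend: grow an LV
        assert extra <= self.free, "not enough free space in the VG"
        self.lvs[name] += extra

vg = VolumeGroup("vg_data", pv_sizes=[60, 40])  # two PVs -> 100 GB VG
vg.create_lv("lv_app", 80)
vg.extend_vg(50)                                # provision a new PV
vg.extend_lv("lv_app", 60)                      # then grow the LV into it
print(vg.lvs["lv_app"], vg.free)                # 140 10
```

In the real article, the analogous steps are `pvcreate`/`vgextend`/`lvextend`, followed by a filesystem resize so the file system can use the grown LV.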
Sreek Iyer 2000001K7N Tags:  stepbystep tsam chapter16 cloud_certification tivoli cloud-computing cloud isdm 1 Comment 5,561 Views
Capacity Planning for the Management Platform
Sizing the management platform means sizing the following components, which provide the functional capabilities:
Further, the sizing will be affected by the non-functional considerations that need to be addressed by each of these components of the management platform. One should review the performance reports and workload-handling capabilities of each of the selected products to validate that the proposed sizing can meet the non-functional requirements of the solution.
The size of the management platform depends on the size of the managed environment. It is preferable to keep a centralized management environment and scale it as needed when the managed environment grows. This is often not an easy calculation or a simple process; you need to apply sound engineering to plan the capacity for each capability. Apart from the capabilities discussed above, the following key areas also need to be covered:
Tivoli Service Automation Manager Version 7: Capacity Planning Cookbook is an excellent document that covers the various aspects in detail and provides some samples.
The book also links to some other whitepapers that provide interesting further reading on the subject.
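The rule of thumb above, that the management platform scales with the managed environment, can be sketched as a back-of-the-envelope calculation. The ratio and headroom figures below are illustrative assumptions, not IBM guidance; validate any real sizing against the product capacity-planning reports.

```python
# Rough sizing sketch: management platform capacity scales with the managed
# environment. vms_per_mgmt_node and headroom are illustrative assumptions,
# not vendor guidance.
import math

def management_nodes(managed_vms, vms_per_mgmt_node=500, headroom=0.25):
    """Estimate management server count for a given number of managed VMs,
    keeping spare headroom so the platform can absorb growth."""
    effective = managed_vms * (1 + headroom)
    return max(1, math.ceil(effective / vms_per_mgmt_node))

# e.g. 2,000 managed VMs with 25% headroom at 500 VMs per management node:
print(management_nodes(2000))   # 5
```

The point of the `headroom` parameter is the "scale it as needed" advice above: a centralized management environment sized only for today's managed environment has no room to grow into.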
JeffHebert 060001UEQ2 Tags:  virtual paas enterprise elastic secure cloud iaas saas scalable ibm reliable 6,116 Views
JeffHebert 060001UEQ2 Tags:  iaas server ibm analytics virtualize cloud saas software paas storage 4,974 Views
How do I size my cloud?
A cloud is not a cloud if it is not elastic, and the elastic property of the cloud, expanding and shrinking based on demand, is possible only with proper capacity planning. I feel the most difficult exercise in building a cloud solution is capacity planning for your cloud. By this, I mean you have to size:
Most of the engagements I've walked into already have some capacity or infrastructure that the client wants us to leverage in the cloud. Comparison becomes difficult if you don't have a standard measuring unit for your infrastructure; for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in this interesting article:
The answer to this difficult question was to use something called the cloud CPU unit, which is simply computing power equal to the processing power of a one-gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz will have the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
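The normalization above reduces to a one-line formula, shown here as a small sketch. As the article implies, clock speed alone is a crude proxy across architectures (an Intel core and a POWER7 core at the same GHz do different amounts of work), but it gives a common yardstick.

```python
# The "cloud CPU unit" normalization: one unit equals the processing power
# of a 1 GHz CPU, so total units = CPUs x cores per CPU x GHz per core.

def cpu_units(cpus, cores_per_cpu, ghz_per_core):
    return cpus * cores_per_cpu * ghz_per_core

# The article's example: 2 CPUs x 4 cores x 3 GHz.
print(cpu_units(2, 4, 3))   # 24
```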
The other dimension of the complexity is determining the resource needs and doing the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big the cloud should be. Some critical questions that I typically ask:
The IBM infrastructure planner for cloud made life easy for me, with a user-friendly interface that took me through these steps and arrived at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I'll discuss the details of how to plan the managed environment in my next post.
I'll be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
JeffHebert 060001UEQ2 Tags:  brocade switching iaas networking paas ibm cloud saas 4,115 Views
In Collaboration With Ixia, Brocade Will Demonstrate the Performance, Reliability and Advanced Feature-Set of the Industry's First 100 GbE Terabit-Trunk Router
LAS VEGAS, NV -- (MARKET WIRE) -- 05/09/11 -- - INTEROP 2011 -- Brocade (NASDAQ: BRCD) today announced that it will work with Ixia (NASDAQ: XXIA) to replicate mission-critical service provider environments and test high-capacity Brocade® Ethernet network solutions designed to help service providers become cloud-optimized. The demonstration creates a true-to-life service provider infrastructure scenario for increasing IPv4/IPv6 routing scalability within the core Multiprotocol Label Switching (MPLS) network while retaining high service levels for end customers. The demonstration will be held in the Brocade booth (# 833) during Interop Las Vegas 2011, at the Mandalay Bay Convention Center.
As service providers evolve to become destinations offering cloud-based services, rather than just basic data delivery, the performance and scalability demands on their networks have increased significantly. The Brocade MLXe Core Router is a 100 Gigabit Ethernet (GbE)-ready solution that enables service providers and virtualized data centers to support these demands by efficiently delivering cloud-based services that use less infrastructure and help reduce expenditures.
In this specific demonstration, Brocade and Ixia will test the IPv4/IPv6 traffic flows, MPLS and throughput capabilities of the Brocade MLXe multiservice router over 10 and 100 GbE connections. By leveraging Ixia's leading test solutions, attendees will be able to view the following:
Cisco’s apparently going to try to simplify its sales, services and engineering organizations in the next 120 days
By Maureen O'Gara
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top-executive exodus it has been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay off 550 people, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's going to a "streamlined operating model" focused on five areas, not apparently the literally 30 different directions it's been going in although it did say, come to think of it, something about "greater focus" so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services; collaboration; data center virtualization and cloud; video; and architectures for business transformation."
Nobody seems to know what that last one is and the Wall Street Journal criticized Cisco for not being able to explain in plain English what it's doing and Barron's complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway Cisco's apparently going to try to simplify its sales, services and engineering organizations in the next 120 days or by July 31 when its next fiscal year begins. Well, maybe not everything, it warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore saying: "Cisco is focused on making a series of changes throughout the next quarter and as we enter the new fiscal year that will make it easier to work for and with Cisco, as we focus our portfolio, simplify operations and manage expenses. Our five company priorities are for a reason - they are the five drivers of the future of the network, and they define what our customers know Cisco is uniquely able to provide for their business success. The new operating model will enable Cisco to execute on the significant market opportunities of the network and empower our sales, service and engineering organizations."
IBM Systems Storage
Brocade OEM Partners Provide Support for Fibre Channel Fabric Innovation to Enable Cloud-Optimized Networks
Optimization of SAP Infrastructure to result in better performance, low costs and high energy efficiency
EHNINGEN, Germany - 20 Apr 2011: Today IBM (NYSE: IBM) announced that Audi selected IBM to build a cloud environment for Audi's SAP infrastructure to deliver higher performance, fast and flexible provisioning of SAP applications and capacity, lower infrastructure costs, and above-average energy efficiency, with the ability to scale future SAP applications almost without limit.
Audi was facing challenges scaling its IT systems: the increased use of business-critical applications in areas such as production and logistics, supplier relationship management and human resources was straining its IT infrastructure's reliability and flexibility.
In April 2010, Audi signed a contract with IBM to rebuild its existing SAP infrastructure, including consolidation and virtualization of the server hardware, process standardization, opportunities for performance-related billing and much higher operational flexibility. Audi's new SAP infrastructure solution is based on a new generation of high-performance IBM POWER7 servers and IBM database technology (DB2).
"Along with a very high level of reliability and failure safety, the new SAP infrastructure solution, which we will migrate into a private cloud, will substantially lower energy consumption," said Audi's Lorenz Schoberl, head of IT Infrastructure Services. "The DB2 solution's built-in data compression capability will enable us to save time and reduce the costs of storage and archiving."
"We were able to demonstrate that our combination of POWER servers and DB2 will decrease the total cost of ownership over the next four years -- from a business and technology point of view," said Gunter Frohlich, IBM Client Manager for Audi.
The new infrastructure is fully operational and will be managed by IBM in a private cloud environment hosted in Audi's data center.
About IBM Cloud Computing
IBM has helped thousands of clients adopt cloud models and manages millions of cloud based transactions every day. IBM assists clients in areas as diverse as banking, communications, healthcare and government to build their own clouds or securely tap into IBM cloud-based business and infrastructure services. IBM is unique in bringing together key cloud technologies, deep process knowledge, a broad portfolio of cloud solutions, and a network of global delivery centers. For more information about IBM cloud solutions, visit www.ibm.com/smartcloud
For more about IBM, visit www.ibm.com/de/pressroom.
Brocade Leads OpenFlow Adoption to Accelerate Network Virtualization and Cloud Application Development
SAN JOSE, CA -- (MARKET WIRE) --
SDN involves several components, one of the most important being standards-based OpenFlow, an emerging standard that gives service providers granular control of their network infrastructures. Brocade will leverage its work in developing OpenFlow across its high-performance service provider portfolio to enable customers to build high-value applications across their networks with greater efficiency and unparalleled simplicity.
Today's service providers and network operators face a number of challenges that require multiple solutions in order to ensure highly efficient and profitable operation. Brocade's goal in working with the
Brocade has developed an OpenFlow enabled IP/MPLS router as part of its service provider product portfolio for application verification and interoperability testing with its partners and customers. Brocade plans to make additional OpenFlow strategy and product announcements later this year. Brocade will initially focus its efforts on delivering solutions that enable the scalability and manageability required in hyper-scale cloud infrastructures.
"Stronger definition of network behavior in software is a growing trend, and open interfaces are going to lead to faster innovation," said
Social Media Tags: Brocade, OpenFlow, NetIron, Storage Area Networks, SAN, IP, Fibre Channel, Ethernet, WAN, LAN, Networks, Switch, Router
Brocade CTO Named to TechAmerica CLOUD(2) Commission
Commission to Provide Recommendations on Deployment of Cloud Technologies to the United States Federal Government
The commission's mandate is to deliver recommendations to the U.S. government on ways it can effectively deploy cloud technologies and set specific public policies that will help drive further cloud innovation in both the private and public sectors.
Brocade has direct and highly relevant experience in the challenges and opportunities that the CLOUD(2) Commission is addressing, by virtue of its 15 years of experience building mission-critical data center networks for some of the most demanding IT environments in the world. This experience and expertise has positioned Brocade to address the challenges of moving to more agile, flexible cloud IT models.
The Brocade approach, as defined by its Brocade One™ strategy, is to help its customers migrate smoothly from current networking architectures to a world where information and applications reside and can be accessed anywhere through open, multivendor cloud technologies.
"Brocade is an established leader in building and deploying fabric-based data center architectures, and customers continue to trust their networks to Brocade as they move to highly virtualized and cloud models," said
The commission will make recommendations for how government should deploy cloud technologies and address policies that might hinder U.S. leadership of the cloud in the commercial space. Recommendations for government deployment will be presented to Federal Chief Information Officer
The commission is composed of 71 experts in the field, from both the business and academic worlds. Leading the CLOUD(2) commission are co-commissioners
Also joining co-chairmen Benioff and Capellas representing academia will be
A full list of commissioners is available at http://www.techamericafoundation.org/cloud-commission-commissioners
To learn more about CLOUD(2), please visit http://www.techamericafoundation.org/cloud-commission
Sreek Iyer | Tags: cloud, cloud-computing, tsam, stepbystep, isdm, cloud_computing
Chapter 14 - Management Platform & Managed Environments
To design a good cloud management platform we need to understand the managed environment. The workloads will include not only those running on virtual infrastructure but also those on traditional infrastructure, so we need to design a management platform that can support the delivery of traditional services as well as cloud services.
The advantage of using the IBM reference architecture (see the previous chapter) is that we keep the service management cost to a minimum and can manage multiple services (IaaS, PaaS, SaaS, traditional services) through a single management platform (the Common Cloud Management Platform).
The design of the management platform is mainly driven by what platforms we need to manage as well as the services we have to deliver. The core components of the management platform are determined by the amount of service automation expected to be provided by the platform.
The cloud management platform can be thought of as analogous to a Service Delivery Platform in the telecommunications industry. The term Service Delivery Platform (SDP) usually refers to a set of components that provides a service delivery architecture (such as service creation, session control and protocols) supporting multiple service delivery models.
The core components can in turn be classified into business support (BSS) components and operational support (OSS) components. The business components cover managing the customer, subscription, offering & catalog, contract, order, billing and financial aspects of the platform. The OSS components deal with the back-end aspects of fulfilling the service request, so they include service automation, provisioning, monitoring and management.
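As a sketch of this split (the component names come from the paragraph above; the dictionary and helper function are hypothetical, purely for illustration):

```python
# Illustrative grouping of management-platform components into
# business support (BSS) and operational support (OSS) layers.
PLATFORM_COMPONENTS = {
    "BSS": [
        "customer", "subscription", "offering & catalog",
        "contract", "order", "billing", "financials",
    ],
    "OSS": [
        "service automation", "provisioning",
        "monitoring", "management",
    ],
}

def layer_of(component: str) -> str:
    """Return the support layer a component belongs to."""
    for layer, members in PLATFORM_COMPONENTS.items():
        if component in members:
            return layer
    raise KeyError(component)

print(layer_of("billing"))       # → BSS
print(layer_of("provisioning"))  # → OSS
```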
The IBM Tivoli suite of products addresses almost all of the OSS requirements as well as some of the key BSS components. As an architect, the key decision is to look at the capabilities required by the client and create a platform that is extensible. This needs to be done with flexibility in mind, meaning you retain the ability to add and remove components to support different capabilities. In an established and mature data center, it is highly unlikely that all these components will be delivered by a single vendor. That's why an architecture built on open standards is critical to building a good management platform.
IBM is leading the efforts for adoption of standards by different cloud providers, consumers and tools vendors; the work being done by IBM with the Open Group and the Cloud Standards Customer Council are two examples.
Once we have determined the functional components of our solution we need to worry about the non-functional requirements. These include aspects like security, availability, resiliency, performance, scalability, capacity planning and sizing. We will need to determine these aspects for the management platform based on the size and heterogeneity of the managed environment. We will discuss these aspects in the next chapter.
Intel® Cloud Builders Reference Architecture Library
Key challenges and focus areas for IT include enhancing efficiency, security, resource utilization, flexibility, and simplifying data center management, among others. Intel works closely with leading systems and solution providers to deliver proven reference architectures to address IT challenges. This work is based on IT requirements—from a wide range of end users—that address challenges in evolving to cloud and next-generation data centers, including the evolving usage requirements of the Open Data Center Alliance. This lab-based experience is embodied in Intel® Cloud Builders reference architectures. Each reference architecture provides detailed instructions on how to install and configure a particular cloud software solution using Intel® Xeon® processor-based servers.
Developed with ecosystem leaders, the following reference architectures relate to building a cloud, or Infrastructure as a Service (IaaS), and to enhancing and optimizing cloud infrastructure with a focus on security, efficiency, and simplifying your cloud environment.
Learn more about how to build and optimize your cloud infrastructure via reference architecture guides below. Read More>
IBM Joins Forces with Over 45 Organizations to Launch Cloud Standards Customer Council for Open Cloud Computing
New user-led group to focus on addressing the challenges and requirements of using an Open Cloud
In our previous posts on the IT industry’s shift to the Cloud Services era, we’ve provided definitions, market context, user adoption trends, and user views about cloud services benefits, challenges and suppliers.
The development of this forecast involved a team of over 30 IDC analysts, led by Robert Mahowald (Business Applications/SaaS), Tim Grieser (Infrastructure Software), Steve Hendrick (Application Development & Deployment Software), Matt Eastwood (Servers) and Rick Villars (Storage), with additional contributions from David Tapper (Outsourcing/Hosted Services) and John Gantz (Global Research).
SAN FRANCISCO, CA, - 07 Apr 2011: IBM (NYSE: IBM) today unveiled its next generation IBM SmartCloud, an enterprise-class, secure cloud specifically created to meet the demands of businesses.
To accelerate the shift from experimentation, development and assessment to full scale enterprise deployment of cloud, IBM is building out its existing cloud portfolio with IBM SmartCloud, enterprise cloud technologies and services offerings for private, public and hybrid clouds based on IBM hardware, software, services and best practices.
As part of this announcement, IBM is demonstrating a next-generation, enterprise cloud service delivery platform currently piloting with key clients and available later this year. For the first time, enterprise clients will be able to select key characteristics of a public, private and hybrid cloud to match workload requirements from simple Web infrastructure to complex business processes, along five dimensions, including:
· Security and isolation
· Availability and performance
· Technology platforms
· Management Support and Deployment
· Payment and Billing
The IBM SmartCloud includes a broad spectrum of secure managed services, to run diverse workloads across multiple delivery methods both public and private. It includes customer choice with the potential for end-to-end management of service delivery from the server and operating system to the application and process layer.
“The new IBM SmartCloud allows for the best of both worlds – the cost savings and scalability of a shared cloud environment plus the security, enterprise capabilities and support services of a private environment,” said Erich Clementi, senior vice president, IBM Global Technology Services. “In thousands of cloud engagements, we have discovered that enterprise clients want a choice of cloud deployment models that meet the requirements of their workloads and the demands of their business.”
This level of choice and control translates into capabilities customized to your needs and priorities, whether you’re deploying a simple web application, an ordering logistics system or a complete ERP system.
The new IBM cloud can enable organizations, their employees and partners, to get what they need, as they need it – from advanced analytics and business applications to IT infrastructure like virtual servers and storage or access to tools for testing software code - all deployed securely across IBM’s global network of cloud data centers.
The IBM SmartCloud has two implementation options: Enterprise and Enterprise +.
- Enterprise – Available today, this offering expands on our existing Development and Test Cloud, allowing customers to extend internal development and test efforts and to reduce application development tasks from days to minutes via automation and rapid provisioning, with over 30% cost reduction versus traditional application environments.
- Enterprise + – To be made available later this year, Enterprise + will complement and expand on the value of Enterprise, offering brand-new capabilities that provide a core set of multi-tenant services to manage virtual server, storage, network and security infrastructure components, including managed operational
Cloud computing fundamentals
Summary: A revolution is defined as a change in the way people think and behave that is both dramatic in nature and broad in scope. By that definition, cloud computing is indeed a revolution. Cloud computing is creating a fundamental change in computer architecture, software and tools development, and of course, in the way we store, distribute and consume information. The intent of this article is to aid you in assimilating the reality of the revolution, so you can use it for your own profit and well being. Learn more>
Last year’s acquisition policy pronouncements are starting to be felt across the U.S. Army, with upticks in cloud computing initiatives, increasing use of fixed-price contracts and adoption of social media.
“Army IT spending will remain stable; the goal is to optimize the IT [spending]. Optimization will be guided by computing trends,” said Gary Winkler, Army program executive officer for enterprise information systems.
He was one of several Army acquisition speakers at the AFCEA Belvoir Industry Days conference at the National Harbor in Oxon Hill, Md. Winkler also recently announced he is leaving the Army.
Efforts to improve efficiency, realign spending priorities and streamline a cumbersome acquisition process were launched during the past year amid a tightening national budget by Defense Secretary Robert Gates and Ashton Carter, undersecretary of defense for acquisition, technology and logistics.
Leading the charge for the Army’s efforts to hold down spending and become more efficient are cloud computing initiatives, mobile technologies, data center consolidation and social collaboration, Winkler said.
Winkler said that mobile data traffic is on track to increase by 39 times between 2009 and 2014, and the social software market is showing 40 percent growth per year through 2013 — also contributing to getting the Pentagon’s policies rolling further down in operations.
The Army also wants to increase use of firm fixed-price and multiple-source contracts, as directed in Carter’s Better Buying Power initiative, and is looking to maximize broadly scoped contracts that can be used for a variety of missions.
However, there are still plenty of challenges, and there likely will be more to come. Winkler predicted that force reductions could still lie ahead for DOD, citing his own experience in the 1980s when, like now, an insourcing effort was followed by a hiring freeze — which was later followed by layoffs.
“We can tighten our belts and squeeze a little bit [as directed by the Pentagon] — but I think it’s going to be more than just a little bit,” Winkler said.
Still, PEO-EIS has been involved in the development of Better Buying Power tenets, including helping shape concepts and strategies for improving tradecraft services, establishing common taxonomy and reforming IT acquisition — all banner items in Carter’s 23-point acquisition reform plan released last September. Read More>
"Provision public cloud resources or securely extend your internal virtualized infrastructure into the public cloud with VMware and our vCloud Powered service providers, the largest ecosystem of cloud computing partners. Leverage secure hybrid cloud resources with confidence while providing choice and flexibility, ensuring interoperability and portability of workloads between cloud environments with a VMware vCloud infrastructure built on VMware vSphere, VMware vCenter, VMware vCloud Director, and VMware vShield."
"Security often comes up as a big stopping point for cloud computing. One of the ways around this is to build a private cloud – one that remains within the corporate firewall and wholly controlled internally.
That was the approach taken by Los Alamos National Laboratory as it seeks to create an infrastructure on demand (IOD) architecture to simplify the rollout of new technology projects and to eliminate delays in storage, server and network provisioning.
Anil Karmel, IT manager at Los Alamos National Lab, noted four tenets that played a major role in the private cloud decision:
• green IT
• streamlined operations
• rapid scaleup/down
“As we deploy more virtual servers, we consume far less power and also reduce electronic waste,” said Karmel. “We estimate eventual savings of $1.3 million annually due to IOD.”
Server capacity on demand is now achievable in a few clicks. Instead of 30 days to provision a server, it now takes less than 30 minutes.
The organization is utilizing HP c7000 blade enclosures along with HP Virtual Connect Fibre Channel/Flex 10 Ethernet. HP BL460c and BL490c blades are used, with each blade containing multiple quad-core and six-core chips.
A NetApp SAN was brought in to add storage capacity. This is based on the NetApp V-Series with 2 PB of Tier 2 SATA storage. Tier 1 is provided by existing HP arrays.
The cloud itself consists of four elements: a web portal at the front end; Microsoft SharePoint as the automation engine for cloud workflows, and also as the integration point for functions such as chargeback; VMware vCloud Director to manage and operate the cloud; and VMware vShield to provide security at both the application level and at the user device level.
“Any virtual environment has to be cost effective, so that means it has to be simple while being aware of any and all changes in real time,” said Karmel.
This is especially important in the security arena. Traditional security operates at the hardware or software layer. But the addition of a virtualization layer, said Karmel, provides too many gray areas for such security tools to operate effectively. Hence security itself is now being virtualized to eliminate yet another wave of security holes showing up in the corporate networks.
Using Infrastructure on Demand, the National Lab is creating virtual security enclaves using vShield that prevent one desktop or client from infecting others, and keeps virtual machines (VMs) out of harm’s way. Rules are set indicating access rights, as well as security protocols based on threat detection. Traditional security tools interface with this virtual security layer to keep servers and devices more protected. Any time a threat is detected, the offending virtual computer is sent to a remediation area, which has no network connectivity with which to propagate malware.
“This all occurs automatically based on preset policy,” said Karmel. “If a VM is moved from one host to another, the security policy given to it moves with it.”
To prevent VM sprawl, VMs are given an expiry date. This is one year by default, though it can be adjusted. Thirty days before the due date, an email is automatically generated asking the VM owner about renewal.
A similar email is sent with 10 days left, and again the day before expiry. As soon as the VM is turned off, the user is informed and asked whether they want it back online. Even then, 29 days later, the user is told the VM is scheduled for deletion. The next day it is deleted.
However, a backup is retained for seven years just in case. The NetApp storage is used to create snapshots of VMs before they are retired to tape. For now, restores are not automated. But in the next version of Infrastructure on Demand, users will be able to restore VMs they desire in a few clicks. “Lifecycle management of VMs is very important,” said Karmel.
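The expiry workflow described above can be sketched as follows (a hypothetical illustration; the function and its names are mine, with the one-year default lifetime and the 30/10/1-day reminders taken from the text):

```python
from datetime import date, timedelta

def vm_lifecycle(created: date, lifetime_days: int = 365) -> dict:
    """Compute the key dates in the VM expiry workflow: reminder
    emails at 30, 10 and 1 day(s) before expiry, then deletion
    30 days after the VM is turned off at expiry."""
    expiry = created + timedelta(days=lifetime_days)
    return {
        "reminders": [expiry - timedelta(days=d) for d in (30, 10, 1)],
        "expiry": expiry,
        "deletion": expiry + timedelta(days=30),
    }

schedule = vm_lifecycle(date(2011, 1, 1))
print(schedule["expiry"])    # → 2012-01-01
print(schedule["deletion"])  # → 2012-01-31
```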
The organization has erected a chargeback structure. Cloud resources are priced according to CPU, RAM and disk. Users can see the total cost before submitting a request for IT resources. Following a request, the line manager has to approve and accept the charges to that unit.
“You have to build best practices around our workloads,” said Karmel. Service Level Agreements (SLAs) are set at four 9’s. If some hardware goes down and Infrastructure on Demand doesn’t meet the SLA, it doesn’t charge for that resource for that month. In addition, uptime and availability metrics are regularly published so users are fully informed. At the moment, separate network, security and virtual server teams are being maintained to monitor the infrastructure. Over time, this may be streamlined into one centralized unit."
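The chargeback rule above might be expressed as follows (a sketch under stated assumptions: four 9’s is read as 99.99% monthly uptime, and the function itself is hypothetical, not the lab's actual billing code):

```python
def monthly_charge(base_charge: float, uptime_pct: float,
                   sla_pct: float = 99.99) -> float:
    """Charge for a resource only if its monthly uptime met the SLA;
    a missed SLA means no charge for that resource that month."""
    return base_charge if uptime_pct >= sla_pct else 0.0

print(monthly_charge(100.0, 99.995))  # → 100.0 (SLA met)
print(monthly_charge(100.0, 99.9))    # → 0.0 (SLA missed)
```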
Chapter 13 - Cloud Computing Reference Architecture
One of the important things to decide when you discuss cloud service strategy and design is the consideration of a reference architecture. This is something useful to align to, as it represents the blueprint for your cloud and makes the implementation less risky. The Cloud Computing Reference Architecture (RA) is intended to be used as a blueprint / guide for architecting cloud implementations, driven by the functional and non-functional requirements of the respective cloud implementation. The RA defines the basic building blocks - the architectural elements and their relationships - which make up the cloud. The RA also defines the basic principles which are fundamental to delivering and managing cloud services.
The reference architecture is more than just a collection of technologies and products. It consists of several architectural models and is much like a city plan. The RA defines how your cloud platform should be constructed so that it can satisfy not only your current demands but also be extensible to support the future needs of a diverse user population. So this blueprint should be responsive to changing business and technology requirements and adaptable to emerging technologies. Existing “legacy” products and technologies as well as new cloud technologies can be mapped onto the AOD to show integration points among the new cloud technologies and between the cloud technologies and already existing ones. By delivering best practices in a standardized, methodical way, an RA ensures consistency and quality across development and delivery projects.
The IBM Cloud Computing RA is structured in a modular fashion around each functional capability (the architectural elements), the user roles (that we discussed in Chapter 12) and their corresponding interactions. The IBM CCRA is based on several cloud engagements and incorporates the good practices and methods implemented across those projects, so for an end user adopting these good practices, the risk and cost of implementing their cloud will be low. The CCRA is built on the ELEG (Efficiency, Lightweightness, Economies-of-scale, Genericity) principles.
One of the principles that I want to highlight here is the Genericity principle - the capability to define and manage services generically along the lifecycle of cloud services: be generic across I/P/S/BPaaS and provide an ‘exploitation’ mechanism to support various cloud services using a shared, common management platform. As discussed in the cloud delivery and deployment models (Chapter 3), there can be many models for deploying and delivering a cloud service. A cloud service can represent any type of (IT) capability which is provided by the cloud service provider to cloud service consumers - infrastructure, platform, software or business process services. The beauty and significance of the IBM Cloud Computing Reference Architecture is that it can cater to any of these service delivery and deployment models. So whether you are building a private cloud or a public cloud, or using the cloud to deliver IaaS, PaaS or SaaS, the RA remains the same and handles all of these combinations. We have seen the capabilities that we need for implementing a common cloud management platform (Chapter 6).
IBM has recently submitted the IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) to the Cloud Architecture Project of the Open Group, a document based on “real-world input from many cloud implementations across IBM” meant to provide guidelines for creating a cloud environment. Check out this link which has the interview with Heather Kreger, one of the authors of Cloud Computing Reference Architecture as well as the details of the components that make up the CCRA.
On this topic there is also an article I found in the SYS-CON Cloud Computing Journal comparing the reference architectures of the big three (IBM, HP and Microsoft), which is an interesting read.
Before we get into the details of the Service Implementation / Transition phase, it is important that we understand the bigger picture. The Word document IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) provides a great description of this bigger picture and goes into the details as required. The architectural principles define the fundamental principles which need to be followed when realizing a cloud across all implementation stages (architecture, design, and implementation). This is a must-read for all – development teams implementing the cloud delivery and management capabilities as well as practitioners implementing private clouds for customers.
-By the End of the Decade One in Four UK Power Stations are Set to Close and UK Gas Production is Expected to be Half of Current Levels, yet Demand for Electricity is Expected to Increase by More than 50 Per Cent by 2050
-This Collaboration is to Create a Flexible, Secure and Scalable Data and Communications Hub to Support the UK Government's Smart Meter Implementation Programme and its Strategy to Cut Emissions by 80 Per Cent by 2050
LONDON - 21 Mar 2011: IBM (NYSE: IBM) and Cable&Wireless Worldwide (LSE: CW.L) today jointly announce their collaboration to develop a new intelligent data and communications solution, UK Smart Energy Cloud, to support the UK's Smart Meter Implementation Programme, which aims to roll out more than 50 million smart meters in the UK.
UK Smart Energy Cloud has the potential to provide a complete overview of energy usage across the country and pave the way for easier implementation of a smart grid. The solution will utilise the extensive experience IBM has gained from leading and implementing smart grid programmes around the world and its proven enabling software and middleware. The solution will be supported by C&W Worldwide's extensive, secure next-generation network and communications integration capability.
There has never been a more challenging time for the energy industry with decisions being taken to protect the country's energy supply that will have significant implications for everyone in the UK. Both smart meters and the smart grid are significant steps on the journey to a new energy future, potentially changing for the better the way we consume and distribute energy.
JeffHebert 060001UEQ2 Tags:  scalability ibm iaas server storage reliability network saas cloud power paas performance 3,367 Views
The unprecedented interest and projected IT spend on cloud computing is coming from all types of organizations, businesses and governments that are seeking to transform the way they deliver IT services and improve workload optimization so they can quickly respond to changing business demands. Cloud computing can significantly reduce IT costs and complexities while improving asset utilization, workload optimization and service delivery.
Today’s IT Infrastructures face challenges on many levels:
As a result of these challenges, organizations are demanding an IT infrastructure and service delivery model that enables growth and innovation. An effective cloud computing environment built with IBM Power Systems™ cloud solutions helps organizations transform their data centers to meet these challenges:
Power Systems cloud solutions enable customers to build an effective cloud computing environment that reduces IT costs, improves service delivery and enables business innovation.
ARMONK, N.Y. - 24 Mar 2011: IBM (NYSE: IBM) today launched new, cloud-based software designed to help marketers gain real-time, actionable insight from data available across social media channels.
The new software expands IBM's business analytics capabilities by enabling organizations to develop faster, more precise social media marketing programs that support their brand's total online presence through a cloud-based delivery model.
The first product, IBM Coremetrics Social, helps companies analyze the business impact of their social marketing initiatives, while IBM Unica Pivotal Veracity Email Optimization Suite analyzes email links that are shared across social network platforms, enabling marketers to better capitalize on opportunities across channels.
Today's news follows IBM's recent announcement of new software and the creation of a new consulting practice dedicated to the emerging category of "Smarter Commerce," which is focused on helping companies swiftly adapt to rising customer demands in today's digitally transformed marketplace. Smarter Commerce includes new cloud analytics software that enables companies to monitor their brand's presence in real-time through social media channels to better assess the effectiveness of new services and product offerings, fine tune marketing campaigns, and create sales initiatives in real-time.
"IBM's approach to social media analytics is based on the understanding that people interact with an organization's brand in a number of ways—including email, social networking sites and company Web sites—and the true measure of business impact demands a fully integrated view of the interaction with these resources," said John Squire, chief strategy officer, IBM Coremetrics. "The new social media analytics software unveiled today will help marketers develop more targeted, highly-measurable, and effective social media marketing campaigns."
IBM Coremetrics Social enables organizations across a wide range of industries to measure the effectiveness and return on investment (ROI) of their social marketing initiatives by gaining insight from data that's publicly available on social media websites.
This Smarter Commerce offering delivers real-time intelligence on the social media response to a particular brand, or the products, content and services being offered, and enables clients to make fact-based, accurate decisions about marketing expenditures. As a result, marketing teams can easily attribute business impact to social referrals in the context of other marketing programs.
Using the analytics foundation of the Coremetrics Continuous Optimization Platform™ and its complete suite of marketing optimization applications, IBM Coremetrics Social provides cross-channel reporting and benchmark capabilities to track and improve social marketing campaigns. With social benchmarking, brands can evaluate the effectiveness of their social initiatives relative to their peer companies, and understand where they excel, and where there is opportunity for improvement.
It has become routine for social networks to be used as a resource to broadly share links to special offers made available by companies via email. Well-known brands can expect to see as much as 38 percent of their special offer email links shared across social networks. An average of 28 percent of these links are then 'liked' or commented on.
The new IBM Unica Pivotal Veracity Email Optimization suite tracks and analyzes email links that are shared across social network platforms, delivering actionable insights which marketers can turn into recognizable profit. Unlike other technologies, this new offering opens the doors for marketers to identify, track, and improve the perception of their brands across channels. The Social Email Analytics software tracks all links associated with a marketer's brand and email, not just the intended links a marketer shares. This approach better encompasses and reflects the emerging complexities and ramifications of consumer interactions with brands, starting with email and ending up in the social realm. With this new software, marketers can also hone Web pages for social networks and better identify opportunities across channels.
For more information on IBM's Smarter Commerce initiative, please visit: http://www-03.ibm.com/press/us/en/presskit/33983.wss
For more information on Coremetrics, an IBM Company, please visit http://www.coremetrics.com/
For more information on Unica, an IBM Company, please visit http://unica.com/
JeffHebert 060001UEQ2 Tags:  reform federal saas iaas government paas storage enterprise cloud servers it network 3,987 Views
Vivek Kundra has been an impact player. Since joining the Obama administration as the government’s first CIO, Kundra has been in constant motion, championing one initiative after another, including cloud computing, transparency, metrics and data center consolidation.
But in December 2010, Kundra got everyone’s attention — inside the Washington Beltway and beyond — when he rolled out the administration’s much-anticipated 25-point plan for reforming IT management. The initiative, which pulls together some ideas that have been floated before, provides an IT road map for the next two years. It focuses on shorter procurement cycles, better program management and improved government/industry communications.
Kundra got kudos for spearheading an extensive outreach effort that gave industry groups and agency stakeholders ample opportunity to weigh in on the plan.
Read more about the 2011 Federal 100 award winners.
JeffHebert 060001UEQ2 Tags:  paas federal saas cloud strategy government state iaas ibm 3,099 Views
IBM Expands the Institute for Electronic Government in Washington to Focus on Advancements in Analytics and Cloud Computing
Virtual Collaboratory to Connect Thousands of Government Leaders Globally
WASHINGTON - 01 Mar 2011: IBM (NYSE: IBM) today announced a major expansion of its Institute for Electronic Government (IEG) in Washington, D.C., adding cloud computing and analytics capabilities for public sector organizations around the world.
IBM has moved and expanded the facility in order to meet the growing demand from Government, Health Care and Education leaders who recognize the potential of cloud computing environments and business analytics technologies to improve efficiencies, reduce costs and tackle energy and budget challenges.
According to recent IBM surveys of technology leaders globally, 83 percent of respondents identified business analytics -- the ability to see patterns in vast amounts of data and extract actionable insights -- as a top priority and a way in which they plan to enhance their competitiveness. In addition, an overwhelming majority of respondents -- 91 percent -- expect cloud computing to overtake on-premise computing as the primary IT delivery model by 2015.
The institute provides insights and expertise on emerging technology solutions, drawing on IBM researchers, experts in advanced software platforms, and consultants with deep industry knowledge in areas such as government, health care, transportation, social services, public safety, customs and border management, revenue management, defense, logistics, and education. Read More>
ARMONK, N.Y. & BENGALURU, India - 04 Mar 2011: Today IBM (NYSE: IBM) and The Karnataka Vocational Training and Skill Development Corporation (KVTSDC), an organization within the Department of Labour in India's fastest growing state, announced a new partnership to help millions of citizens find work using their mobile devices. Once created, this technology could be applied in emerging economies around the world.
The World Wide Web has provided unfettered access to information, opened new business and employment opportunities, transformed the way we communicate, helped eliminate geographical barriers and paved the way for global collaboration and integration. But in many of the world's most rapidly growing economies, there is a lack of affordable access to personal computers and the Internet – and in rural areas in particular, widespread illiteracy compounds this gap.
Today in India only 7 percent of the population has access to the Web, but at the same time mobile phones and services are becoming increasingly affordable and reliable, creating the emergence of a Mobile Web and opening the door for citizens to access important government services through their phones. Read more>
JeffHebert 060001UEQ2 Tags:  emc saas hds virtualize cloud paas iaas ibm netapp hp 5,620 Views
ARMONK, N.Y., - 08 Dec 2010: IBM (NYSE: IBM) today announced the availability of new online software services based on the same on-premise solutions used by clients today – now delivered as a monthly subscription offering – that enable better automation and control of IT Service Desk functions. This new service adds to IBM's software-as-a-service offerings that help automate a range of IT services critical to maintaining business operations.
Even small and mid-size companies deal with labor-intensive services for employees such as resolving IT issues, fixing laptops and onboarding new hires. Many companies struggle with slow, inefficient service request handling because at the core their networking, facilities, application support and IT assets aren't integrated and typically depend on manual updates. For example, IBM estimates that only five percent of service and support issues are resolved by self-service, making automation and integration crucial for service management. Learn More>
JeffHebert 060001UEQ2 Tags:  disk dedupe ibm paas nas virtualize cloud iaas storage san saas 3,792 Views
IBM offers three types of cloud solutions, for storage and other services: Smart Business on the IBM Cloud, Smart Business Cloud services, and Smart Business Systems.
- Smart Business on the IBM Cloud are standardized services provided by IBM on a pay-per-use basis.
- Smart Business Cloud services are private cloud services, behind your firewall, built and/or run by IBM.
- Smart Business Systems are purpose-built, integrated service delivery platform solutions.
IBM also offers cloud consulting to help plan and convert applications to the cloud model.
JeffHebert 060001UEQ2 Tags:  cloud emc hp storage servers ibm network virtualization 3,477 Views
IBM expands its virtualization, image management and cloud computing leadership with major technology breakthroughs
LAS VEGAS, - 01 Mar 2011: PULSE 2011 -- IBM (NYSE: IBM) today showcased a series of technology breakthroughs that extend its leadership capabilities in virtualization, image management and cloud computing, including software that can virtualize a data center within minutes to instantly meet business demand.
These new technologies build on IBM's existing provisioning and image deployment capabilities that help clients better manage virtualized cloud environments to achieve greater business efficiency, agility and innovation while controlling costs.
According to IDC, $17 billion was spent on cloud-related technologies, hardware and software in 2009. IDC expects that spending will grow to $45 billion by 2013.(1)
The demand for cloud computing is exploding as organizations seek to expand the impact of IT to deliver new and innovative services while realizing significant economies of scale. The power of the cloud computing model is the ability to harness varying technology investments by enabling rapid and dynamic scheduling, provisioning and management of virtualized computing resources on demand.
IBM has helped thousands of clients adopt cloud models and manages millions of cloud-based transactions every day in areas as diverse as banking, communications, healthcare and government, helping clients securely tap into IBM cloud-based business and infrastructure services. By offering proven solutions to accelerate the deployment of advanced infrastructure virtualization with capabilities to visualize, control, and automate these infrastructures, IBM helps global organizations optimize their ROI from technology. Read More>
Chapter 12 - Cloud Users & Roles
There are several actors typically involved in cloud solutions from a business perspective. Their roles and responsibilities and their relationships with other actors vary based on the industry. The business actors' responsibility is to make appropriate cloud investment decisions. Once an organization has started with cloud, there are some typical actors involved in the day-to-day operational consumption and provision of cloud services. This chapter is focused on the latter and not on the business actors, which typically include people like the CIO/CTO/COO, business operations controllers and procurement managers.
Following are some of the key organizations that are typically involved in a cloud solution. The actors and roles are then defined for users under each of these key organizations.
Cloud Service Consumer: The service consumer is the end user or enterprise that actually uses the cloud service.
Cloud Service Provider: The service provider delivers the service to the consumer.
Cloud Service Creator / Developer: The service developer creates and publishes the cloud service.
These provider organizations, the typical roles and their associated activities are discussed in detail in the Cloud Use Cases whitepaper, and Dave Russell has an open thread on Cloud Computing Central to discuss these in detail.
Out of all the roles across all these organizations, the key roles from an implementation and operation perspective are the following.
Cloud Administrator who can perform the following tasks:
Cloud User who can perform the following tasks:
Accordingly, Tivoli Service Automation Manager provides two different user interfaces for these two key cloud roles – an administrative user interface and a self-service user interface. Find details here.
There are variations of these two roles depending on the design of the cloud provider and consumer organizations. These include roles like the Team Administrator, who can perform tasks for a group of users, such as creating and maintaining user accounts as well as placing requests on behalf of a project.
These business-specific roles then need to be mapped to application roles like Service Administrator, Service Definition Designer/Manager, Service Deployment Operator and Manager, etc. The security framework implementation should take care of this role mapping. The security function of Tivoli Service Automation Manager lets you manage which users can log into the user interface and which applications each user can access. The broader discussion on security, specifically authentication followed by authorization, will be covered in a separate chapter.
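As an illustration only – the role and interface names below are hypothetical, not the actual Tivoli Service Automation Manager identifiers – the mapping from business roles to application roles, and from there to the user interface each role may log into, can be sketched as a simple lookup:

```python
# Hypothetical role names for illustration; a real deployment defines its own.
ROLE_MAPPING = {
    "Cloud Administrator": {"Service Administrator",
                            "Service Definition Designer",
                            "Service Deployment Manager"},
    "Team Administrator":  {"Service Deployment Operator"},
    "Cloud User":          {"Self-Service Requester"},
}

# Each application role grants access to exactly one of the two UIs.
UI_ACCESS = {
    "Service Administrator":       "administrative UI",
    "Service Definition Designer": "administrative UI",
    "Service Deployment Manager":  "administrative UI",
    "Service Deployment Operator": "administrative UI",
    "Self-Service Requester":      "self-service UI",
}

def allowed_uis(business_role: str) -> set[str]:
    """Resolve which user interfaces a business role may log into."""
    return {UI_ACCESS[app_role] for app_role in ROLE_MAPPING.get(business_role, set())}

print(allowed_uis("Cloud User"))           # the self-service UI only
print(allowed_uis("Cloud Administrator"))  # the administrative UI
```

The security framework's job is exactly this resolution: business role in, permitted application roles and interfaces out.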
RHyman 06000032P4 Tags:  #ibmpartners ibmcloud cloud-specialty iaas cloud-services developerworks business-partners cloud-computing ibmpartners saas cloud ibmontwitter cloud_computing specialty #ibmcloud 6,092 Views
Today IBM announced new cloud computing initiatives for Business Partners. The first is the IBM Cloud Computing Specialty – a single program to develop the IT industry's broadest ecosystem of companies working together to provide a wide range of cloud computing services and technologies for clients of all sizes and industries. The second is the IBM Software Value Plus Cloud Computing Authorization for software resellers.
Both these initiatives are complementary. IBM Business Partners with an SVP Cloud Authorization will have completed the IBM Software skills required for the Cloud Specialty. While the IBM Cloud Specialty focuses on the development and promotion of top cloud Business Partners, the new authorization is an extension of the IBM Software Value Plus program, specifically for IBM software Business Partners that have built and demonstrated specialty skills and then receive financial incentives as resellers of IBM's software portfolio.
You may recall the recent IBM developerWorks survey of more than 2,000 IT professionals worldwide showed 91 percent believe cloud computing will overtake on-premise computing as the primary way organizations acquire IT by 2015. Industry analysts have also said that the cloud opportunity is expected to more than double in the next few years.
And IBM developerWorks continues to be committed to being your source for the technical resources to build your cloud skills and ensure you can participate in the coming opportunities. The Cloud zone on IBM developerWorks lets you collaborate with peers to solve your development issues and excel with cloud computing, keeping you in lock step with the opportunities expected to arise as cloud computing grows.
It's an exciting space – grow your knowledge to participate in the smarter planet.
Chapter 11 – Self Service Portal & Service Catalog
One of the key aspects of cloud service management is automation, which ensures that you can manage huge and growing infrastructures while controlling cost and quality. To attain this goal, we need a self-service portal and a service catalog. Results show that with these components in place, the wait time for services decreases by an average of 98%.
Traditional processes would require you to fill out a paper form and put it through the approval process. Finally, the capex is approved and the order is placed for the hardware and software. You are also required to constantly follow up with the IT provider teams to learn the status of hardware/software availability, installation, provisioning, and so on. Even if all the details are provided correctly up front, there is still a chance of errors in the hardware and software provisioning because the process is manual.
With the self-service portal, these requests and their tracking are automated. You can track the status of the workflow online, ask for services when you need them, and have most of the provisioning happen automatically through the implemented workflows. There is less chance of error and faster provisioning with the self-service portal and automation.
Thus the Self-Service GUI allows end users to request IT Resources and optionally automatically fulfill that request.
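The contrast with the manual process above can be sketched as a toy request lifecycle; the class, stage and offering names are invented for illustration and are not the Tivoli Service Automation Manager API:

```python
import itertools

class SelfServicePortal:
    """Toy sketch: a catalog request moves through an automated workflow
    instead of a manual, paper-based approval chain."""

    _ids = itertools.count(1)  # request ticket numbers

    def __init__(self):
        self.requests = {}

    def submit(self, offering: str, requester: str) -> int:
        """File a request against a catalog offering; fulfillment starts at once."""
        req_id = next(self._ids)
        self.requests[req_id] = {"offering": offering,
                                 "requester": requester,
                                 "status": "submitted"}
        self._fulfill(req_id)  # the automated workflow kicks in immediately
        return req_id

    def _fulfill(self, req_id: int) -> None:
        # A real system would drive approval and provisioning workflows here;
        # this sketch simply walks the request through each stage.
        for stage in ("approved", "provisioning", "completed"):
            self.requests[req_id]["status"] = stage

    def status(self, req_id: int) -> str:
        """Online status tracking replaces chasing the IT provider by phone."""
        return self.requests[req_id]["status"]

portal = SelfServicePortal()
rid = portal.submit("Linux virtual server (small)", "alice")
print(rid, portal.status(rid))
```

The essential point is that submission, fulfillment and status tracking all live in one automated system, which is what removes the wait time and the manual errors.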
Tivoli Service Automation Manager provides a set of pre-defined services for Virtual Server Management. These are available as part of a service catalog that is accessible to end user through the Self-Service UI. The Self-Service Virtual Server Management functionality addresses a long-standing need by data centers to efficiently manage the self-service deployment of virtual servers and associated software. Using a set of simple, point-and-click tools, an end user can select a software stack and have the software automatically installed or uninstalled in a virtual host that is automatically provisioned.
These tools integrate with IBM Tivoli Service Request Manager to provide a self-service portal for reserving, provisioning, recycling, and modifying virtual servers, and working with server images, in the following platform environments in a virtualized non-production lab (VNPL). This functionality ensures the integrity of fulfillment operations that involve a wide range of resource actions.
These capabilities enable you to achieve incremental value by adopting a self-service virtual server provisioning process, growing and adapting the process at your own pace, and adding task automation to further reduce labor costs around defined provisioning needs.
Before users in the data center can create and provision virtual servers, administrators perform a set of setup tasks, including configuring the integration, setting up the virtualization environments managed by the various hypervisors, and running a Tivoli Provisioning Manager discovery to discover servers and images across the data center.
After this initial setup has been completed, the administrator associates the virtual server offerings with Tivoli Provisioning Manager virtual server templates. In addition, the Image Library is used as the source for software images to be used in provisioning the virtual servers.
Data center users who have Cloud Admin rights can use the Service Automation Manager Offering Catalog application to create and provision virtual server deployments.
The Offering Catalog application contains all the offerings that are available to the end user. There are steps that you need to perform on the catalog that will make specific offerings visible to specific end user groups. The end user interface is a Web 2.0 interface which can be edited to expose it via a Service Catalog. The Web 2.0 UI is designed in an extensible, modular way that allows for programmatically extending it.
Tivoli Service Automation Manager defines security groups that are used to provide role-based functions that can be performed via the administrative user interface or the self-service user interface. We will discuss the User access management for the Self-Service Virtual Server Provisioning component in the next chapter.
Chapter 10 – Cloud Service Design using Tivoli Service Automation Manager
When we are building a solution for a certain kind of IT service, the design should cover two important parts.
Tivoli Service Automation Manager supports both of these models and concepts, which are aligned around the ITSM service lifecycle.
The structural model describes what the service to be managed looks like, while the operational model defines what processes can be executed on the service. The structural model in Tivoli Service Automation Manager defines all the components that make up a service as well as the relationships between them.
The Service Topology application allows the representation of the service in terms of hardware servers and their associated software. The primary data the Service Topology application operates on are topology and topology node objects, and the application provides a means for viewing and editing them.
The operational model defines all the management processes that can be run on the service described by the structural model, in particular the processes that are subject to automation. This is done as a process model for the service, which typically contains process templates that can be instantiated for various stages of a service's lifecycle, including creation, modification of a deployed service, etc. Each process defined in the process model – Tivoli Service Automation Manager uses the term Management Plan – is basically a definition of a sequence of tasks performed on the service's components, aimed at achieving a certain management goal. Each management plan represents a specific process or action to be taken with respect to an instance of a service definition. A management plan also provides the means for describing where the input data for each task comes from, and where the output data of a task shall be stored for further processing.
Service Definitions are used to capture the design of a service both from a structural point of view and from a process-centered point of view. Upon an end-user request, new Service Deployment Instances can be built based on the model captured in the respective Service Definitions. Those Service Deployment Instances are used by Tivoli Service Automation Manager to deploy and manage services in the real world.
Finally, once the design of a service being automated is complete, offerings can be created and published into service catalogs. Services implemented in Tivoli Service Automation Manager can be exposed to end users in an easily accessible way, based on the notion of service catalogs and service offerings.
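A minimal sketch of these concepts – all class and field names are hypothetical, not Tivoli Service Automation Manager APIs – shows a service definition combining a structural topology with named management plans, from which deployment instances are built on request:

```python
from dataclasses import dataclass, field

@dataclass
class TopologyNode:
    """Structural model: a server and its associated software."""
    name: str
    software: list[str]

@dataclass
class ManagementPlan:
    """Operational model: a sequence of tasks serving one management goal."""
    goal: str          # e.g. "create", "modify", "terminate"
    tasks: list[str]

@dataclass
class ServiceDefinition:
    """Captures the design: structure plus the processes runnable on it."""
    name: str
    topology: list[TopologyNode] = field(default_factory=list)
    plans: dict[str, ManagementPlan] = field(default_factory=dict)

    def instantiate(self) -> "ServiceDeployment":
        """Upon an end-user request, build a deployment instance from the model."""
        return ServiceDeployment(definition=self)

@dataclass
class ServiceDeployment:
    definition: ServiceDefinition

    def run_plan(self, goal: str) -> list[str]:
        """Execute (here: just return) the task sequence for a management goal."""
        return self.definition.plans[goal].tasks

web_service = ServiceDefinition(
    name="two-tier-web",
    topology=[TopologyNode("web01", ["httpd"]), TopologyNode("db01", ["db2"])],
    plans={"create": ManagementPlan("create",
                                    ["provision servers", "install software", "configure"])},
)
deployment = web_service.instantiate()
print(deployment.run_plan("create"))
```

One definition, many deployment instances: that separation is what lets a published offering serve repeated end-user requests.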
Chapter 9 – Cloud Service Design
Once you have installed and set up your management platform, you are ready to start designing and delivering cloud services using the platform.
SOA & Cloud
We use the same principles of Service-Oriented Modeling and Architecture (SOMA), which links business intent with its realization through IT, for modeling cloud services as well. In SOA, we use business process models to understand a series of sequentially organized business activities, the events that trigger them, the roles that perform them, inputs, outputs, control points, etc. As discussed in the Service Strategy section, we look to design cloud services that are better aligned to business requirements.
As in SOA, for service identification and design one could take any of the following approaches.
In a top-down approach, development usually starts with high-level business and structural modeling of the service. Then you define the management processes required for the service to be in operation. The top-down approach is further characterized in that no, or only a few, automation or fulfillment assets exist when starting the solution design. The design and implementation of those assets, including their interfaces and granularity, will be driven primarily by the high-level automation model. The advantage of the top-down approach is a clear design of the service to be automated, including the structural and operational models.
The bottom-up approach is usually characterized by a large number of automation assets that already exist, perhaps in the form of many existing scripts or workflows. In the bottom-up approach, we take these low-level assets and abstract them into a cloud service.
In practice we might go with a combination of both approaches, the meet-in-the-middle approach.
We model the service so we can learn, capture, and abstract details about "things," their structures, the relationships between them and, often, their behaviors (collaborations, states). All the factors that we consider when modeling a service in SOA are very much applicable to a cloud service too. These include, but are not limited to:
"The ABCs of Service Design for Clouds" by David Linthicum is a good article which discusses where SOA meets cloud.
Service Management & Cloud
Now let's discuss the same from the Service Management / ITIL perspective. Cloud services have a lifecycle that maps to the service management lifecycle.
The Service Design phase includes defining the service, creating it and registering it into a catalog. We will look at how this can be done using Tivoli Service Automation Manager in the next chapter.
Service Design is a critical step that delivers the following benefits
Chapter 8 – Cloud Service Strategy
As discussed in Chapter 5, IBM Integrated Service Management provides the software, systems, best practices and expertise needed to manage infrastructure, people and processes—across the entire service chain—in the data center, across design and delivery, and tailored for specific industry requirements. The Service Management Goals are the following
These principles and goals are the same for Cloud Service Management as well. End to End Service Management includes the following steps.
Cloud Maturity and Readiness
Cloud Service Strategy is mainly about deciding which services we want to deliver and how to ensure the competitiveness of providing them through the cloud. Today's clients are seeking to utilize their assets to enable business innovation. The service strategy is all about choosing from across multiple compute / deployment models. We need to assess the current IT infrastructure and identify and evaluate the set of capabilities for their readiness to move to the cloud.
Selecting between the Cloud Deployment Models
For mission-critical workloads that drive business innovation, a private cloud is preferred. For secondary workloads and supporting business functions, a public cloud is suitable. While a public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible price-per-use basis focused on utility, a private cloud drives efficiency, standardization and best practices while retaining greater customization and control, with a focus on innovation.
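The rule of thumb above can be sketched as a simple decision function. The two inputs and the mapping are illustrative assumptions drawn from this paragraph, not a formal selection method.

```python
def choose_deployment_model(mission_critical: bool,
                            needs_customization: bool) -> str:
    """Illustrative rule of thumb: private cloud for mission-critical
    or highly customized workloads, public cloud for standardized
    secondary workloads."""
    if mission_critical or needs_customization:
        return "private"  # control, customization, focus on innovation
    return "public"       # utility, standardization, price per use

print(choose_deployment_model(True, False))   # private
print(choose_deployment_model(False, False))  # public
```

In a real strategy engagement the decision would weigh many more factors (compliance, data sensitivity, integration needs), but the shape of the decision is the same.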
When doing Service Strategy, you need to consider expertise across industries and standards. At this phase, we normally consider reusing and leveraging solutions based on industry best practices, including ITIL, COBIT, eTOM, and ISO.
Calculating the ROI
Cloud computing ROI is an important consideration during the Service Strategy phase. This includes verifying the following fundamental aspects related to making a service available on the cloud.
There are several ROI frameworks and methods available that allow you to validate the approach and strategy against these three fundamental aspects. Most service companies have their own frameworks, which are typically the intellectual capital of their service teams.
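As a sketch of what such a framework computes at its simplest, here is a back-of-the-envelope ROI calculation. The cost figures and the three-year horizon are made-up examples for illustration, not any vendor's method.

```python
def cloud_roi(current_annual_cost, cloud_annual_cost,
              migration_cost, years=3):
    """Return ROI as a fraction: net savings over the horizon
    divided by the up-front migration investment."""
    savings = (current_annual_cost - cloud_annual_cost) * years
    return (savings - migration_cost) / migration_cost

# Example: $500k/yr on premise, $300k/yr in the cloud,
# $250k one-time migration cost, evaluated over 3 years.
roi = cloud_roi(500_000, 300_000, 250_000)
print(f"{roi:.0%}")  # 140%
```

Real frameworks add discounting, risk weighting and soft benefits (agility, time to market), but the core comparison of run-rate savings against migration investment stays the same.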
Choosing the right Delivery Models and Workloads
Based on the Enterprise Architecture approach, we need to choose from the many available delivery models and workloads. This includes services and consulting engagements to obtain clarity on the business drivers (business vision, strategy, timeline, business model, and business operating model) and on how they can leverage technology and value enablers from cloud computing. In this cycle you also need to identify the right set of workloads to move to the cloud, the ones that fetch the maximum benefit from cloud computing. The flexibility that the business operating model gains to innovate on the business model is another key consideration. This can be an iterative effort of identifying candidates and then gradually moving them to production.
One of the biggest challenges in utilizing cloud computing in your organization is deciding where to start and how to focus your efforts. IBM provides a Cloud Adoption Advisor to get started on the topic. The Open Group has also published a whitepaper on building return on investment from cloud computing.
Key Benefits from Service Strategy
Chapter 7 - IBM Tivoli Service Automation Manager – Architecture Overview
Each of the integrated capabilities required to implement service management for the cloud is provided by IBM Tivoli Service Automation Manager (referred to as TSAM in this chapter). TSAM supports the cloud through all phases of the entire service lifecycle. The steps include
To support these phases, it provides the following capabilities.
Each of these capabilities is delivered by discrete components within TSAM.
A quick view of the architecture will help you understand how these capabilities are provided seamlessly by multiple components underneath TSAM.
Figure 1 Tivoli Service Automation Manager - Architecture Overview
Below are the key components and their responsibilities:
Tivoli Service Request Manager
Tivoli Service Automation Manager (Service Design)
Tivoli Provisioning Manager
Even though I would like to go into detail on each component as part of this post, I'm not going to do so because, as discussed in the initial post, the objective of this blog is to provide the reader with pointers to the content they need, not to repeat what is already available elsewhere. You can read more about the TSAM architecture on the TSAM wiki on developerWorks.
I'm including the list of software bundles for TSAM 7.2.1 to give a better understanding of the components involved.
Again, the TSAM infocenter provides more details on the typical hardware and software requirements and related topics.