Implementation details of the microservice can be studied in the source code by loading the project into your preferred Java IDE, such as Eclipse.
Before the microservice can be run inside Docker, Docker must be installed on your local machine. You can follow the step-by-step Docker installation procedure at: Docker Installation
Once Docker is installed correctly, you can test your installation using the following command:
docker run hello-world
Create a Microservice Docker Image
In the Docker ecosystem, there are two main concepts to understand.
Docker container: a Docker container is a lightweight instance of a Linux-based OS running on top of your host operating system.
Docker image: a Docker image packages your application software together with the entire environment it needs, which then runs inside a container.
For the above microservice, the container loads the microservice image; as part of this image it loads not only the application code for the microservice but also the Java 8 environment needed to run it.
But before you can load the microservice into Docker, you need to create a Docker image for that software. The steps to create the image are as follows:
Create a build directory (for example, hello-microservice-build) next to your microservice project
Copy the microservice artifacts (the jar file and the YAML configuration) to the build directory
Create a Dockerfile in the build directory whose final instruction starts the service:
CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml
From the Docker session, go to the hello-microservice-build directory and issue the command
docker build -t hello-microservice-local .
The Docker build process uses a file named Dockerfile to get its instructions about what to do when building an image. For this particular microservice, the Dockerfile instructs the Docker system to download an image called 'java:8'; this is the core infrastructure needed to run the microservice. Next it adds the microservice jar and configuration to the image, and finally it exposes ports 9000 and 9001 to service requests.
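The Dockerfile described here can be sketched roughly as follows. This is a minimal, hypothetical reconstruction: the base image, ports and artifact names come from this tutorial, but the exact instructions in the project's actual Dockerfile may differ.

```dockerfile
# Base image providing the Java 8 runtime the microservice needs
FROM java:8

# Add the microservice jar and its configuration to the image
ADD hello-microservice-1.0-SNAPSHOT.jar hello-microservice-1.0-SNAPSHOT.jar
ADD hello-microservice.yaml hello-microservice.yaml

# Application port and admin port used to service requests
EXPOSE 9000 9001

# Start the microservice when the container launches
CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml
```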
docker build -t hello-microservice-local . is the command that processes the Dockerfile and produces the hello-microservice-local image.
Note: make sure this command is issued from the Docker session and not just any command line session.
Once this Java Microservice Docker image is created, it must be run inside a Docker container using the following command:
docker run -p 9000:9000 --name hello-microservice-local -t hello-microservice-local
With the recent growth of cloud computing technologies, organizations are using cloud service models such as infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS), along with cloud deployment models (public, private and hybrid), to deploy their applications.
There is a concept in the cloud world that is based on application characteristics: the concept of cloud-enabled and cloud-centric applications. In this blog post, Dan Boulia provides a concise explanation about the concept.
You can say that a cloud-enabled application is an application that was moved to the cloud but was originally developed for deployment in a traditional data center. Some characteristics of the application had to be changed or customized for the cloud. On the other hand, a cloud-centric application (also known as cloud-native or cloud-ready) is an application that was developed with the cloud principles of multi-tenancy, elastic scaling, and easy integration and administration in its design.
When developing an application that will be deployed in the cloud, you must keep the cloud principles in mind and account for them in the application's design. So we come to the first point: is it better to rework an existing application or to completely redesign it? There is no exact answer, because it depends. You have to weigh the level of effort (labor, time and cost) to transform the application into a cloud-enabled one against the effort to completely redesign it as a cloud-centric application.
The second point is: Will my cloud-enabled application work better than a new cloud-centric application? Here I would say no. It’s rare to find an existing traditional application that was developed with any of the cloud principles in mind. It may be possible to construct the same feel (for the user) as a cloud-centric application, but it will not function the same way internally.
Changing an existing application could be easier, since you already have the skills and tools in the organization and you won't need to learn any new technology. However, while it may be easier to change the application, in the long term it will be harder to maintain. New technologies (social media, mobile, sensors) continue to appear, and it is becoming more important to integrate them. Doing this will require additional and continuous effort and may exponentially increase development and support costs.
Now comes the third point: What can you use to help expedite the move or redevelopment of an existing application to a cloud-centric model? Many cloud companies have development tools that can help an organization on this path. For instance, IBM has recently announced IBM Bluemix, a development platform to create cloud-centric applications. Shamim Hossain explains the capabilities in more detail in his blog post. Another option is to use IBM PureApplication System to expedite the development.
I discussed some points here that I hope provide a better understanding of an important concept in cloud computing and how to address it. Let me know your thoughts! Follow me on Twitter @varga_sergio to talk more about it.
Come to the first Cloud Foundry Meetup in the Waltham area this coming Wednesday, December 11th!
This meetup is your opportunity to learn more about Cloud Foundry and meet people excited about the technology.
On the agenda is an Introduction to Cloud Foundry: the technology and the community by Chris Ferris of IBM.
This will be followed by a talk by Renat Khasanshyn of Altoros on Implementing Cloud Foundry 2.0.
More information at: //bit.ly/1azS5PX
Managing software and product lifecycle integration has always been a challenge, and with the pace of new demands on the enterprise, the challenges are increasing. Leaders from standards organizations and industry will lead interactive discussions on the importance of open technologies in helping enterprises manage lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards, and the importance of this work to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.
The Open Lifecycle Summit will feature short lightning talks and panel discussions with industry leaders such as OASIS CEO Laurent Liscia, Tasktop CEO Mik Kirsten, Opscode VP of Solutions George Moberly, IBM Fellows Michael Kaczmarski and Kevin Stoodley, and IBM VP of Standards and IBM Cloud Labs Dr. Angel Diaz.
The Summit is free to attend for all those attending IBM Innovate. Join us for an exciting session and refreshments to start your attendance at Innovate 2013. For more information and to RSVP visit http://ibm.co/16jTusU
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization.
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes difficult to accurately assess or manage. Integrating cost management into overall service management is crucial, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency: the ability to allocate IT costs, usage, and value.
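As a toy illustration of the cost-allocation idea mentioned above, the sketch below apportions a shared monthly infrastructure cost across tenants in proportion to their metered usage hours. All class, tenant and rate names are invented for the example; real chargeback systems draw on metering and rating engines rather than a hand-built calculation like this.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of usage-based chargeback: allocate IT costs to
// tenants from metered usage hours. Names and figures are illustrative.
class ChargebackSketch {

    // Cost for one tenant under a flat rate: metered hours times hourly rate.
    static double cost(double usageHours, double hourlyRate) {
        return usageHours * hourlyRate;
    }

    // Allocate a shared total cost across tenants in proportion to usage.
    static Map<String, Double> allocate(Map<String, Double> usageHours, double totalCost) {
        double totalHours = usageHours.values().stream()
                .mapToDouble(Double::doubleValue).sum();
        Map<String, Double> allocation = new LinkedHashMap<>();
        for (Map.Entry<String, Double> e : usageHours.entrySet()) {
            allocation.put(e.getKey(), totalCost * e.getValue() / totalHours);
        }
        return allocation;
    }

    public static void main(String[] args) {
        Map<String, Double> usage = new LinkedHashMap<>();
        usage.put("team-a", 300.0); // VM hours this month (invented)
        usage.put("team-b", 100.0);
        // team-a used 75% of the hours, so it bears 75% of the cost
        System.out.println(allocate(usage, 1000.0));
    }
}
```

Proportional allocation is only the simplest chargeback model; tiered rates or reserved-capacity charges would extend the same idea.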
As a result of feedback from SmartCloud Enterprise customers and business partners, IBM is rolling out new enhancements this week.* In addition to the availability of IBM SmartCloud Application Services, IBM's platform-as-a-service offering, new and enhanced capabilities for IBM SmartCloud Enterprise include:
Platinum M2 VM sizes, now generally available
Alternate Windows Instance Capture, now generally available
Windows Import/Copy pre-release, available by request
Windows 2012 pre-release, available to all users
Cloud Services Framework enhancements
APIs for guest messaging, new and available for all users
ISO 27001 Certification for all IBM SCE data centers
Object storage with enhanced portal integration with SCE
All the details of each new capability and enhancement can be found on the SCE portal in the "What's New in SmartCloud Enterprise 2.2" document (SCE account sign-in is required to review the document), but here are a few highlights:
IBM SmartCloud Application Services (SCAS)
IBM's platform as a service -- IBM SmartCloud Application Services -- runs on top of and deploys virtual resources to IBM SmartCloud Enterprise. SmartCloud Application Services delivers a secure, automated, cloud-based environment that supports the full lifecycle of accelerated application development, deployment and delivery. SCAS provides an enterprise-class infrastructure, enhanced security and pay-per-use, and allows clients to differentiate themselves with built-in flexible options that configure cloud their way -- leading to a competitive advantage.
You can find the SmartCloud Application Services offering on the "Service Instance" tab within your SmartCloud Enterprise account.
As a direct result of client requests, we are offering additional flexibility and choice in Windows instance capture. Clients can now use the "Save private image" function with or without the use of Sysprep, the Microsoft System Preparation tool.
We invite you to learn more about all of these enhancements via the documentation library in the SCE portal, and we welcome your feedback. Thank you for your continued support!
* IBM will roll out these new capabilities in waves beginning mid-December 2012. IBM's platform as a service offering, IBM SmartCloud Application Services, can be found in the "Service Instance" tab within your SmartCloud Enterprise account.
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. Learn more: http://ibm.co/UeAl0B
The challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity. A critical piece to solving these challenges, as many organizations have already discovered, is image management. Read more: http://ibm.co/SpHTlV
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they're tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
Cloud computing is a term that is often bandied about the web these days and often attributed to different things that -- on the surface -- don't seem to have that much in common. So just what is cloud computing? I've heard it called a service, a platform, and even an operating system. Some even link it to concepts such as grid computing -- a way of taking many different computers and linking them together to form one very big computer.
The basic definition of cloud computing is the use of the Internet for the tasks you perform on your computer. The "cloud" represents the Internet.
Cloud Computing is a Service
The simplest thing that a computer does is allow us to store and retrieve information. We can store our family photographs, our favorite songs, or even save movies on it. This is also the most basic service offered by cloud computing.
Flickr is a great example of cloud computing as a service. While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to store those images. In many ways, it is superior to storing the images on your computer.
First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload the photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road or even from your iPhone while sitting in your local coffee house.
Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.
Third, Flickr provides data security. If you keep your photos on your local computer, what happens if your hard drive crashes? You'd better hope you backed them up to a CD or a flash drive! By uploading the images to Flickr, you are providing yourself with data security by creating a backup on the web. And while it is always best to keep a local copy -- either on your computer, a compact disc or a flash drive -- the truth is that you are far more likely to lose the images you store locally than Flickr is to lose your images.
This is also where grid computing comes into play. Beyond just being used as a place to store and share information, cloud computing can be used to manipulate information. For example, instead of using a local database, businesses could rent CPU time on a web-based database.
The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them if you are cut off from the Web.
Cloud Computing is a Platform
"The web is the operating system of the future." While not exactly true -- we'll always need a local operating system -- this popular saying really means that the web is the next great platform.
What is a platform? It is the basic structure on which applications stand. In other words, it is what runs our apps. Windows is a platform. The Mac OS is a platform. But a platform doesn't have to be an operating system: Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends such as Office 2.0, we are seeing more and more applications that were once the province of desktop computers being converted into web applications. Word processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small businesses. But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes, from web mashups to Facebook applications to web-based massively multiplayer online role-playing games.
With new technologies that help web applications store some information locally -- which allows an online word processor to be used offline as well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.
Cloud Computing and Interoperability
A major barrier to cloud computing is the interoperability of applications. While it is possible to insert an Adobe Acrobat file into a Microsoft Word document, things get a little bit stickier when we talk about web-based applications.
This is where some of the most attractive elements of cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- become a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.
Setting aside for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into their spreadsheet, this creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, we'll also need a system to maintain a certain level of security when it comes to this type of data. Possible? Certainly, but it isn't anything that will happen overnight.
What is Cloud Computing?
This brings us back to the initial question: what is cloud computing? It is the process of taking the services and tasks performed by our computers and bringing them to the web.
What does this mean to us?
With the "cloud" doing most of the work, this frees us up to access the
"cloud" however we choose. It could be a super-charged desktop PC
designed for high-end gaming, or a "thin client" laptop running the
Linux operating system with an 8 gig flash drive instead of a
conventional hard drive, or even an iPhone or a Blackberry.
We can also get at the same information and perform the same tasks whether we are at work, at home, or even at a friend's house. Not that you would want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.
Organizations looking to optimize across the application lifecycle recognize the need for enhanced innovation and speed to market. Yet most IT resources are focused on covering the basics, leaving fewer resources to support business agility. The solution: Platform as a Service (PaaS).
IBM's PaaS solution, IBM SmartCloud Application Services, or SCAS, allows clients to differentiate themselves with built-in flexible services that allow them to build and customize cloud solutions their way – leading to a competitive advantage. Companies are using enterprise-class IBM Application Services to measure and respond to market demands, capture new markets, and reduce application delivery and management costs.
What are the benefits of a PaaS solution?
First, with IBM Collaborative Lifecycle Management Service, included within SCAS, development teams can establish shared team development environments in minutes, where it used to take weeks. Within hours they can define their development team and begin working collaboratively to respond to business needs.
Another significant benefit of a PaaS approach is the time it takes to get an application deployed and to market. Application deployment can take weeks on a traditional environment but with IBM SmartCloud Application Services, applications can be deployed to the cloud in minutes.
SCAS also allows clients to respond rapidly to changing market conditions by deploying or modifying cloud-centric (“born on the cloud”) or cloud-enabled (legacy applications) quickly and easily. In fact, developers can move from the dev/test environment directly into production with SCAS, taking advantage of proven repeatable patterns contained within the SmartCloud Application Workload Service, thus eliminating human error. These repeatable patterns allow clients to eradicate errors by avoiding manual processes – this drives consistent results, increases productivity, and reduces risk.
IBM SmartCloud Application Services are compatible with the newly announced IBM PureSystems family. For example, through SmartCloud Application Services clients can rapidly design, develop, and test their dynamic applications on IBM's public cloud and deploy those same application patterns on a private cloud built with PureApplication Systems, or vice versa.
Want to try IBM's PaaS . . . for free*? IBM SmartCloud Application Services is now in pilot and accepting new clients who want to get ready to accelerate their cloud initiatives. Clients won't pay for SCAS services during the pilot, but will only be charged for the underlying *SmartCloud Enterprise infrastructure used by the services (that's because SCAS runs on top of IBM's Infrastructure as a Service offering, SmartCloud Enterprise, or SCE). Existing SCE customers can get up and running on the pilot quickly and start realizing the benefits of PaaS right away.
To be considered for the program, new or existing SCE customers should visit the IBM SmartCloud Application Services web site and click the button on the right titled, "Get a jump on the competition with the SmartCloud Application Services pilot program."
Who is using IBM SmartCloud Application Services? CLD Partners, a leading provider of IT consulting services with a particular focus on cloud computing, began using SCAS during the beta which launched in 2011 and has now transitioned into the pilot program.
“We share IBM’s vision for how enterprise customers can achieve huge productivity gains by embracing cloud technologies. SCAS allowed us to utilize world class software in a managed environment that greatly reduced the complexity of the deployment while also providing for future scalability that our customers only pay for when they need it,” said Steve Clune, Founder and CEO of CLD Partners. “Ultimately, traditional infrastructure planning and configuration that would have required weeks was literally reduced to hours. And future flexibility as infrastructure needs change is virtually limitless.”
Who would be interested in the SmartCloud Application Services pilot program? IT Operations, Independent Software Vendors (ISVs), Line of Business, and Application Developers would benefit from the SCAS pilot program. And it doesn’t matter the company size, enterprise or mid-market; all types of businesses can realize value from getting their applications to market faster.
One of the exciting and valuable characteristics of IBM SmartCloud Enterprise is its tight linkage with the IBM Software Group portfolio of offerings. In addition to the offerings from IBM Software Group, innovative software vendors are making exciting offerings available as well. There is an ever-growing list of offerings available to IBM SmartCloud Enterprise customers. These recent additions are now in the SmartCloud Enterprise public catalog and available for you to use.
BYOL - Bring Your Own License; PAYG - Pay As You Go
IBM Business Process Manager is a comprehensive BPM platform giving you visibility and insight to manage business processes. It scales smoothly and easily from an initial project to a full enterprise-wide program. IBM Business Process Manager harnesses complexity in a simple environment to break down silos and better meet customer needs.
The following BPM images are now available in the catalog:
IBM Process Center Advanced 7.5.1 64b - BYOL
IBM Process Center Standard 7.5.1 64b - BYOL
IBM Integration Designer 7.5.1 64b - BYOL
IBM Process Server Advanced 7.5.1 64b - BYOL
IBM Process Server Standard 7.5.1 64b - BYOL
IBM Process Designer 7.5.1 64b - BYOL, PAYG
IBM BPM Express 7.5.1 64b - BYOL, PAYG
IBM WebSphere Service Registry and Repository (WSRR) is a system for storing, accessing and managing information, commonly referred to as service metadata, used in the selection, invocation, management, governance and reuse of services in a successful Service Oriented Architecture (SOA). In other words, it is where you store information about services in your systems, or in other organizations' systems, that you already use, plan to use, or want to be aware of.
The following WSRR images are now available in the catalog:
IBM WebSphere Service Registry 64bit BYOL
IBM Image IBM WebSphere Service Registry 22.214.171.124 64bit BYOL
IBM WebSphere Message Broker (WMB) delivers an advanced Enterprise Service Bus (ESB) that provides connectivity and universal data transformation for both standard and non-standards-based applications and services to power your SOA.
The following WMB images are now available in the catalog:
IBM WebSphere Message Broker 126.96.36.199 64b BYOL
IBM SPSS Decision Management enables business users to automatically deliver high-volume, optimized decisions at the point of impact to achieve superior results.
The following SPSS image is now available in the catalog
IBM SPSS Decision Management 6.2 64b BYOL
From our partner Riverbed comes Riverbed® Stingray™, a software-based application delivery controller (ADC) designed to deliver faster and more reliable access to public web sites and private applications.
The following Riverbed Stingray images are now available in the catalog:
Riverbed Stingray V 8.0 RHEL 6 32 bit BYOL
Riverbed Stingray V 8.0 RHEL 6 64 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 32 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 64 bit BYOL
Additionally, Alphinat SmartGuide provides visual, drag and drop tools that can help you quickly build interactive web dialogues that guide people to the relevant response, help them diagnose problems or lead them through a series of well-defined steps that make it easy to complete complex—or infrequently performed—tasks.
The following Alphinat SmartGuide images are now available in the catalog:
GridRobotics' Cloud Lab Grid Automation Server can manage any number of client or agent computers, which can be spun up automatically on public clouds like IBM SCE or private clouds. Grid Robotics’ Cloud Lab Classroom is a virtual classroom management solution.
The following GridRobotics Cloud Lab images are now available in the catalog:
GridRobotics Cloud Lab Grid Automation Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Classroom Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Base Agent V 1.4 32b R2 - BYOL
Cloud computing tests the limits of security operations and infrastructure from various perspectives. Let us examine what is different about cloud security and identify which threats already exist and which new areas we should be concerned about.
Figure 2 Cloud Security - Existing & New Threats
I think what makes cloud security complex is the number of layers involved in the cloud service stack and the number of components in each layer. This means:
· Increased infrastructure layers to manage and protect
· Multiple operating systems and applications per server
More Components = More Exposure
As we can see, we already do perimeter protection at the network and operating system levels, as well as physical and personnel security, for the traditional infrastructure. All of these hold good for cloud as well, to combat the existing threats at these layers.
Let us examine the new points of exposure with cloud. Security and resiliency complexities are raised by virtualization and automation, which are essential to cloud. The new risks include:
· Cloud Service Management Vulnerabilities
· Secure storage of VMs and the
· Managing identities on the increasing number of virtual assets
· Stealth rootkits in hardware now possible
· Virtual NICs & Virtual Hardware
· Virtual sprawl, VM stealing
· Dynamic relocation of VMs
· Elimination of physical boundaries
· Manually tracking software and configurations of VMs
For managing these additional complexities, you need a reference model that is comprehensive and covers security controls that can combat not only the existing challenges but also the new challenges that cloud brings in. The Foundational Security Controls for the IBM cloud reference model (see below) provide the different elements and controls required to build a secure cloud.
Figure 1 Foundation Security Controls for IBM Cloud
Managing datacenter identities (identity and access management) is one of the top-most security concerns, and we discussed how to handle it in my previous post. I'll discuss how to handle the virtualization-related threats in my next post. Meanwhile, let me know your comments on this reference model.
Do you think this set of controls is comprehensive? Do you see any areas not covered from a cloud security perspective? If so, just add it as a comment to this post and let us discuss.
Join us for the 2012 IBM SmartCloud Symposium on 16-19 April 2012 in San Francisco, California. This Symposium will help you Rethink IT and Reinvent Business.
event will introduce Cloud Computing’s disruptive potential to not only
reduce cost and complexity but reinvent the way we do business. Over the
course of four days, there will be sessions that define cloud computing
and discuss transformative benefits and challenges to consider while
sharing specific, proven patterns of success. We will provide proven
methods to get started on the Cloud journey from the up-front
investments to capacity planning. This event will cover the technology
behind private and public clouds whether you choose to build your own,
leverage prepackaged solutions or have it delivered as a service.
will explore challenges and solutions for securing, virtualization and
performance of mission critical applications as well as automating
service delivery processes for cloud environments. We will help you:
design, deploy and consume.
While discussing the challenges for cloud, I highlighted Security as the top concern. I also detailed the top concerns with regard to securing the cloud in the subsequent post. Cloud computing tests the limits of security operations and infrastructure across the various security and privacy domains. Cloud brings in a lot of additional considerations like multi-tenancy, data separation, virtualization, etc. In a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases, greatly affecting all aspects of IT security. We will discuss the different security aspects, classifying them against specific adoption patterns (see post here).
The cloud enabled data center pattern is the predominant one, with Infrastructure and Identity management as the top concerns. Within cloud security, getting the design of infrastructure security right is an important aspect; we discussed the details, and how it is done by different public clouds, in the previous post. Now, with regard to Identity, let's discuss the top requirements and use cases and look at what solutions we can provide to make the cloud secure. Let's start with managing datacenter identities, which is the top concern.
Managing Datacenter Identities
Identity and Access Control needs to deliver capability that can be used to provide role-based access to securely connect users to the cloud. The users include cloud service provider as well as consumer roles. Within each user group we need to support both User and Administrator roles. The identity and access management should support the 4As: Authentication, Authorization, Auditing and Assurance.
§For a cloud consumer user, it is about making sure the user identity is verified and authenticated at the self-service portal and providing the right access to the resource pools.
§For the administrator, we need to provide role-based access to Service Lifecycle Management functions.
§We will need to integrate with the existing User Directory infrastructure (AD/LDAP/NIS) to extend the user identity to the cloud environment as well.
§Once in the cloud environment, we need to automatically manage access to the cloud resources, through provisioning and de-provisioning of resource profiles and users against the resources in the cloud identity and access management systems. Manual processes to manage accounts for users on various virtual systems and applications are not going to scale in a cloud environment. The same is true of the manual processes for working through various audit logs to meet compliance and audit requirements.
§Massively parallel cloud-computing infrastructures involve enormous pools of external users as well. We need to ensure a smooth user experience, so that users don't need to enter their credentials multiple times to access various applications hosted within the enterprise or by business partners and Cloud providers.
§Management of user identities and access rights across hosted, private and hybrid clouds for internal Enterprise users is also a major challenge, which includes:
oCentralized user access management for on- and off-premise applications
oFederated Single Sign-on and Identity Mediation across different service providers
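The automated provisioning and de-provisioning requirement above boils down to a reconciliation loop: compare the authoritative directory with the accounts that exist in the cloud, create what is missing, and revoke what is orphaned. The sketch below is a minimal illustration of that idea; all names and data are hypothetical, and a real deployment would call the directory and cloud IAM APIs instead of using in-memory sets.

```python
# Minimal sketch of automated account reconciliation between an
# authoritative user directory and a cloud IAM system.
# All identities and function names here are hypothetical illustrations.

def reconcile(directory_users, cloud_accounts):
    """Return (to_provision, to_deprovision) so that the cloud side
    ends up matching the authoritative directory."""
    to_provision = directory_users - cloud_accounts    # in directory, not yet in cloud
    to_deprovision = cloud_accounts - directory_users  # orphaned cloud accounts
    return to_provision, to_deprovision

# Example run with made-up identities
directory_users = {"alice", "bob", "carol"}   # e.g. pulled from AD/LDAP
cloud_accounts = {"bob", "carol", "mallory"}  # e.g. listed from the cloud IAM

add, remove = reconcile(directory_users, cloud_accounts)
print(sorted(add))     # ['alice']   -> needs a cloud account
print(sorted(remove))  # ['mallory'] -> account should be revoked
```

Running such a loop on a schedule, rather than relying on manual account handling, is what lets identity management scale with the number of virtual systems.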
Let's look at some of the capabilities that we can leverage to address these requirements. IBM Security Identity and Access Assurance provides the following capabilities, which enable clients to reduce costs, improve user productivity, strengthen access control, and support compliance initiatives:
·Policy-based user management that helps effectively manage Enterprise, Web, and federated single sign-on, inside, outside, and between organizations, including cloud deployments
·Identity and access support for files, operating platforms, Web, social networks, and cloud-based applications
·Support for stronger forms of authentication (smart cards, tokens, one-time passwords, and so on)
·Monitoring, investigating, and reporting on user activity across the enterprise
Tivoli Identity Manager complements its role management capabilities with role mining and lifecycle management, provided by the IBM Security Role and Policy Modeler component, which helps reduce the time and effort needed to design an enterprise role and access structure, and automates the process of validating the access information and role structure with the business.
Security Access Manager for Enterprise Single Sign-On offers wide platform coverage, strong authentication enhancements, and simpler deployments. It introduces 64-bit operating system and application support, a virtual appliance for easier installation and configuration of the server, expanded support for smart cards, and simplified profiling.
Tivoli Federated Identity Manager offers additional Open Authorization (OAuth) standards support (for business-to-consumer deployments and utilization of cloud-based applications and identities), enhanced security with the Secure Hash Algorithm (SHA-2), usability enhancements, and new Business Gateway capabilities.
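The federated single sign-on and OAuth support described above ultimately rests on token-based trust: an identity provider issues a signed token once at login, and each relying service verifies the signature instead of prompting for credentials again. The sketch below shows only that verification idea in drastically simplified form, using a shared HMAC secret; it is not how any of the products above work internally, and real OAuth/SAML deployments use standardized token formats, managed keys, and typically public-key signatures.

```python
# Drastically simplified sketch of signed-token single sign-on.
# The shared secret, token format, and user id are all illustrative.
import hmac
import hashlib

SECRET = b"demo-shared-secret"  # illustrative only; real IdPs use managed keys

def issue_token(user):
    """Identity-provider side: sign the user id once at login."""
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{sig}"

def verify_token(token):
    """Relying-service side: accept the token without re-prompting
    for credentials if the signature checks out; else reject."""
    user, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None

token = issue_token("alice@example.com")
print(verify_token(token))                        # alice@example.com
print(verify_token("alice@example.com.deadbeef")) # None (bad signature)
```

The user enters credentials once with the issuer; every other service only needs the issuer's verification key (here, the shared secret), which is what removes the repeated logins mentioned earlier.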
As we discussed in my previous post, transparency, or more control, is the need of the hour with regard to security on the cloud. Let us examine how this is done by the popular cloud providers and understand the methods and the technologies. We need to secure the infrastructure, network, endpoints, applications, processes, data, and information, and overall have a governance to mitigate the risk and meet compliance. Let us take the infrastructure to begin with. The key areas for a security team to design for, with regard to infrastructure security, include logs on all resources – VMs and hypervisors.
Let us start by looking at the public cloud implementations to understand how they are managing these aspects. Almost all the vendors – IBM, Amazon, and others – provide a means to SSH into the Guest OS with keys. The connection is encrypted and authenticated with a key pair that can be generated by the customer.
SmartCloud is designed with enterprise security as a top priority. Access to the infrastructure self-service portal and application programming interface (API) is restricted to users with an IBM Web Identity. The infrastructure complies with IBM security policies, including regular security scans and controlled administrative actions and operations. Within our delivery centres, customer data and virtual machines are kept in the data centre where provisioned, and the physical security is the same as that for IBM's own internal data centres. With the virtual private network (VPN) option, customers can isolate their servers in the IBM SmartCloud on a virtual local area network (VLAN) that can act as an extension of their internal network. This VPN capability can also be used to create security zones in an Internet-facing configuration to better protect their servers against attacks.
·Roles across LotusLive and their access authorizations are recorded in a Separation of Duty matrix
·Security-rich infrastructure: security configuration reviews and periodic vulnerability scanning of all systems and infrastructure
·Enforcement points providing multi-layered application security
·Compliance with periodic programs that address all elements of the service
We will see how the infrastructure security aspects are dealt with for private clouds in my next post. Stay tuned and keep those comments coming. I had some of my readers tell me that the blog entries are not showing up fine on Internet Explorer. While I will make the effort to fix the issue, please use Firefox or any other browser in the meantime.
And if you find these posts interesting, don't forget to rate the post (click on the stars), and if you have an extra minute, do put in a comment on what aspects you find interesting or would like discussed.
IT Security is a well-researched and mature area. The reason why we have enterprises doing commerce over the web today is that IT Security practices, tools and technologies have matured to establish the trust and overcome the concerns. As with most new technology paradigms, security concerns surrounding cloud computing have become the most widely talked about inhibitor of widespread usage, as discussed in my previous post.
To gain the trust of organizations, cloud services must deliver security and privacy expectations that meet or exceed what is available in traditional IT environments. Let us discuss the Top Security Concerns when it comes to cloud.
Transparency or Less Control
If we look at the security and privacy domains in cloud, they are no different from the traditional domains. We need to secure the infrastructure, network, endpoints, applications, processes, data, and information, and overall have a governance to mitigate the risk and meet compliance. But in a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases, greatly affecting all these aspects of IT security. The different cloud deployment models, like public, private and hybrid clouds, also change the way we need to think about security. The responsibilities are spread across Consumers, Service Resellers and Providers. The immediate risk of this shared responsibility is that nobody gets a holistic view of the security, and so there is less customization of any security controls. Consumers need visibility into day-to-day operations, as well as access to logs and policies. The aspect of less visibility, or transparency, is the top-most concern shared universally.
Data and Information Security
The next primary concern that customers mention is related to data and information security. The specific concerns include:
§Protection of intellectual property and data
§Ability to enforce regulatory or contractual obligations
§Unauthorized use of data
§Confidentiality of data
§Availability of data
§Integrity of data
A shared, multi-tenant infrastructure increases the potential for unauthorized exposure, especially in the case of public-facing clouds. Security Administrators need to worry about designing security for applications and data that are publicly exposed and can potentially be accessed by anybody on the internet.
Different industries and geographies have different regulations and rules that they need to comply with, depending on the workloads and data they put on the cloud. Complying with SOX, HIPAA and other regulations is one risk or issue because of which customers are not ready to put their applications on the cloud. Cloud or no cloud, for these sorts of workloads comprehensive auditing capabilities are essential.
Security Management - Methods and Tools
Finally, customers need to know how today's enterprise security controls are represented in the cloud. They need to understand how security events are monitored and correlated, and how actions are taken when needed to keep their infrastructure, workload and data safe. Security getting in the way of high availability is another key concern. IT departments worry about a loss of service should outages occur because of security reasons. If so, when running mission-critical applications, how soon you can get the environment back at the same level of security is the priority.
Until all of these concerns are addressed, and without strong availability guarantees, customers may not be ready to run their apps in the cloud. But things are not as bad as we might think. We will discuss how these aspects can be addressed, and what tools and technologies to put to use, in the upcoming posts.
Cloud Security – The top most concern and Opportunity
First of all, wishing all my readers a very happy and prosperous year 2012 ahead.
A few things happened towards the end of the year which were significant to me. IBM acquired Q1 Labs to Drive Greater Security Intelligence and created a New Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I am happy to be in this new role, where I can provide solutions to customers to handle their cloud security concerns and make it easy for them to adopt cloud and innovate at a faster rate than before.
In my previous post, we discussed security as the top-most concern why customers and enterprises are not adopting cloud. As part of this year's posts, I plan to discuss the various security issues and aspects of cloud computing.
We will explore and understand the unique challenges with Cloud Security, and discuss which aspects are important for each customer adoption pattern that we have seen. We will also learn how the IBM Security Framework can be used to address the various security challenges. I look forward to your comments and inputs in this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the World's Most Comprehensive Security Portfolio – IBM Security Systems. I'll try to elaborate the IBM Point of View on cloud security and discuss the architectural model to address the security requirements for cloud. Stay tuned and keep those comments and inputs coming.
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they're looking to cloud computing to increase agility (the ability of IT to evolve and meet business needs), and they're looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increasing the utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
·Rapidly scalable deployment designed to meet business growth
·A reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
·Reduced complexity through ease of use and improved time to value
·Reduced IT labor with self-service requesting and highly automated operations
·Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we've seen customers get a cloud up and running in just hours, realizing immediate time to value. It's fast: administrators have been able to go from bare metal to ready-for-work in under five minutes, start a single VM and load an OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud.
And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
While I'm writing this blog, the Ministers of Tamil Nadu and Kerala are having a meeting with the Prime Minister to discuss the contentious issue of Mullaperiyar at length. For those who don't know about this issue, it is about the Mullaperiyar Dam in Kerala. Mullaperiyar Dam is a masonry gravity dam over the River Periyar, operated by the Government of Tamil Nadu based on a 999-year lease agreement. The catchment areas and river basin of the River Periyar downstream include five districts of Central Kerala, namely Idukki, Kottayam, Ernakulam, Alappuzha and Trissur, with a large total population.
This dam is at centre stage again in the wake of reports that it is weakening, due to an increase in incidents of tremors in Idukki district in Kerala. Ministers from Kerala are seeking Central Government intervention to ensure the safety of the dam. At the same time, Tamil Nadu is insisting on increasing the water level in the reservoir to enhance water supply to the state. While Tamil Nadu wants to increase the water level in the reservoir, Kerala has been insisting that it be reduced from the current 136 feet to 120 feet.
Currently, I don't think we have clear metrics on the exact usage of water by each state, what the right level of water to be retained by the dam is, what the risks are, etc. We have been relying on the data that we already have.
However you look at it – whether there is too much or not enough – the world needs a smarter way to think about water. We need to look at the subject holistically, with all the other considerations as well. We use water for more than drinking. We need to make an inventory of how much water we get and how it is used – by industries, irrigation, etc.
This is where I think we need smarter ways to manage the water, in the best possible way that addresses both states' concerns. Smarter Water Management can help us think in a smarter way about water. For instance, IBM is helping the Beacon Institute build a source-to-sea real-time monitoring network for New York's Hudson and St. Lawrence Rivers, as well as report on conditions and threats in real time. There are many other case studies across the globe on IBM Smarter Water Management. Those interested in the problem and the possible solutions should definitely read IBM's broader outlook on Water Management as covered in the Global Innovation Outlook.
Rivers for Tomorrow is another interesting partnership between IBM and The Nature Conservancy. IBM is providing a state-of-the-art support system for a free, online application that will provide easy access to data and computer models to help watershed managers assess how land use affects water quality.
Though it's a worldwide entity, water is treated as a regional issue. I think we should try putting technology to use to solve our water problems. The solution should be a more instrumented, interconnected and intelligent system that can not only take into consideration real-time monitoring of the river but also include early-warning systems to notify of risks related to earthquakes, etc. IBM's Strategic Water Management Solutions include offerings to help governments, water utilities, and companies monitor and manage water more effectively. The IBM Strategic Water Information Management (SWIM) solutions platform is both an information architecture and an intelligent infrastructure that enables continuous automated sensing, monitoring, and decision support for water management.
Now, you might be wondering what this has to do with Cloud and why this post is on Cloud Computing Central. For these solutions and platforms to be successful, it is highly important that we have energy-efficient, high-performance computing platforms and complex sensor, metering, and actuator networks. Such platform needs, and the flexibility of having the solution on-premise as well as leveraging different delivery models, can only be supported through a cloud.
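As a toy illustration of the kind of automated sensing and early-warning logic mentioned above, consider a monitor that classifies incoming reservoir-level readings against the two disputed thresholds. The 136-foot and 120-foot figures come from the post itself; the alerting logic and sensor readings are a hypothetical sketch, not any real SWIM component.

```python
# Toy early-warning check for reservoir levels, inspired by the
# 136 ft / 120 ft figures discussed above. The readings and the
# classification scheme are illustrative only.

SAFE_LIMIT_FT = 136.0      # current permitted level (Tamil Nadu's position)
REDUCED_LIMIT_FT = 120.0   # reduced level Kerala has been insisting on

def classify(level_ft):
    """Classify a single water-level reading against the thresholds."""
    if level_ft > SAFE_LIMIT_FT:
        return "ALERT: above permitted level"
    if level_ft > REDUCED_LIMIT_FT:
        return "WATCH: above reduced level sought by Kerala"
    return "OK"

for reading in (118.5, 130.2, 137.8):  # made-up sensor readings in feet
    print(reading, classify(reading))
```

In a real monitoring network, such checks would run continuously on streamed sensor data and feed notification systems, which is exactly the "instrumented, interconnected and intelligent" loop described above.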
I think we should just leverage these solutions on the cloud to solve this issue and keep all the states and their people happy :-).
In my previous post, we looked at understanding the different adoption patterns – i.e. how customers are turning towards cloud. Some of the key reasons for the "why" are listed below:
Ease of deployment
More flexibility in supporting evolving business needs (both from a technical and a business perspective)
Lower cost
Easier way to scale and ensure availability and performance
Overall ease of use
While all of these are good, there are still many yet to get onto this cloud computing train. Let's explore their key concerns, and the challenges that make them reluctant to jump in. The following are inputs that I've gathered from various analyst studies and resources on the internet.
Security and Privacy – The top-most concern that everybody seems to agree is a challenge with cloud is security. Data security and privacy concerns rank at the top of almost all of the surveys. Cloud computing introduces another level of risk because essential services are often outsourced to a third party, making it harder to maintain data integrity and privacy, support data and service availability, and demonstrate compliance.
Real Benefits / Business Outcome – Though we have several case studies showcasing the benefits arising out of implementing cloud technologies, some customers are still not convinced of the possible benefits. Their main concern is how to realize the investment's full potential and make cloud part of their mainstream IT Portfolio. Enterprises need a good view of the real benefits of cloud computing, rather than just seeing its potential to add value. The return on investment (ROI) on cloud needs to be substantiated by comparing specific metrics of traditional IT with Cloud Computing solutions, showing savings that demonstrate cost, time, quality, compliance, revenue and profitability improvements. The cloud ROI model should include things such as indicators for comparing availability and performance versus recovery SLAs, workload-wise assessments, and Capex versus Opex cost benefits.
Service Quality – Service quality is one of the biggest factors that enterprises cite as a reason for not moving their business applications to cloud. They feel that the SLAs provided by the cloud providers today are not sufficient to guarantee the requirements for running production applications on cloud, especially with regard to availability, performance and scalability. In most cases, enterprises get refunded for the amount of time the service was down, but most current SLAs don't cover business loss. Without proper service quality guarantees, enterprises are not going to host their business-critical infrastructure in the cloud.
Performance / Insufficient responsiveness over the network – Delivery of complex services through the network is clearly impossible if the network bandwidth is not adequate. Many businesses are waiting for improved bandwidth and lower costs before they consider moving into the cloud. Many cloud applications are still too bandwidth-intensive.
Integration – Many applications have complex integration needs to connect to other cloud applications as well as other on-premise applications. These include integrating existing cloud applications with existing enterprise applications and data structures. There is a need to connect the cloud application with the rest of the enterprise in a simple, quick and cost-effective way.
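The ROI comparison suggested under "Real Benefits" can be made concrete with a simple Capex-versus-Opex break-even calculation: traditional IT pays a large amount upfront plus a small monthly run cost, while cloud pays nothing upfront but a higher monthly charge. Every figure below is invented purely for illustration; a real assessment would substitute the organization's own cost data and workload-wise metrics.

```python
# Illustrative Capex-vs-Opex break-even for a single workload.
# All figures are made up; substitute real cost data.

CAPEX = 120_000.0            # upfront hardware + licences (traditional IT)
ON_PREM_MONTHLY = 2_000.0    # power, space, admin per month (traditional IT)
CLOUD_MONTHLY = 5_000.0      # pay-as-you-go cloud charge per month

def cumulative_cost(months, upfront, monthly):
    """Total cost of ownership after a given number of months."""
    return upfront + months * monthly

# First month at which traditional IT becomes no more expensive than cloud
break_even = next(
    m for m in range(1, 121)
    if cumulative_cost(m, CAPEX, ON_PREM_MONTHLY)
       <= cumulative_cost(m, 0.0, CLOUD_MONTHLY)
)
print(break_even)  # 40 months: 120000 + 40*2000 == 40*5000 == 200000
```

With these particular (invented) numbers the cloud option is cheaper for any commitment shorter than 40 months, which is the kind of workload-by-workload indicator the ROI model above calls for.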
I plan to discuss more on the perceived and real threats related to Security and Privacy in my subsequent posts. In my new role as an Architect for IBM Security Solutions, I'd like to discuss the details of what IBM tools and technologies you could use to overcome these issues. Meanwhile, keep those comments coming; I look forward to them to understand what other areas you think are key concerns to be addressed to accelerate the adoption of cloud.
The IBM Tech Trends report is out! We asked, you answered. Check out the results of IBM developerWorks' 2011 Tech Trends survey and find out what more than 4,000 IT professionals -- your peers -- have to say about the future of technology, including their opinions on cloud computing, business analytics, mobile computing, and social business.
The report provides insight from the worldwide IT development community into the adoption, preferences and challenges of key enterprise technology trends including cloud, business analytics, mobile computing, and social business. The results also provide guidance on areas where IT professionals like you say they need help with skills to develop new technologies and platforms that will be in demand in the coming years.
As we focus in on cloud, there is absolutely a growing trend to view cloud computing as more than just cheap infrastructure. Companies are now exploring the possibility of developing applications in the cloud (you guys are already doing that), many of them related to mobile development.
Currently, the biggest challenge is integrating the cloud into application development, as the reduction of operating expenses is the driver of this move. We still have a way to go, however, with 40% of the survey responders saying their company is not yet involved in cloud. Hmm, interesting, right?
The cool news is that 75% of those same responders expect this to change over the next two years, with their own and other enterprises taking to building cloud infrastructure.
I did discuss The Next Big Thing – Cloud enabled business model innovation – in my previous post. But you may be asking: where do I start? That's where the Cloud Adoption Patterns work that IBM has pioneered is going to help. This is some great analysis that IBM has done based on the thousands of cloud engagements we have completed so far. This analysis is a good abstraction of the ways organizations are consuming cloud – a good starting/entry point for discussions on cloud. The four most common entry points to cloud solutions are discussed in the picture above. I love these videos on YouTube – Cloud Adoption Patterns – which tell you the essence of these patterns in less than 2 minutes.
Cloud Enabled Data Center – to achieve better return on investment and manage complexity by extending virtualization well beyond just hardware consolidation.
Business Solutions on Cloud – to access enterprise-level capabilities through a provider's applications running on a cloud infrastructure; to improve innovation and flexibility while minimizing risk and capital expense.
Cloud Service Provider – to innovate with new business models by building, extending, enabling and marketing cloud services.
For each of these patterns of cloud adoption, we have defined a set of proven projects that it supports, with software, services and solutions to help businesses streamline the implementation of their chosen cloud capabilities.
The Cloud Enabled Data Center pattern is the case for most private cloud implementations. Most customers start with providing infrastructure as a service on the cloud. This pattern also discusses how we can share infrastructure across multiple projects and drive benefits, as well as the extensive automation of operational and business processes that makes possible a responsive IT department that can help the business be agile.
The next level of gain or reuse would be to run your workloads on a shared stack of middleware. The Platform as a Service pattern is an integrated stack of middleware that is optimized to execute and manage different workloads, for example batch, business process management and analytics. This middleware stack standardizes and automates a common set of topologies and workloads, providing businesses with elasticity, efficiency and automated workload management. A cloud platform dynamically adjusts workload and infrastructure characteristics to meet business priorities and service level agreements. When all the layers below understand what workloads are running on top of them and optimize themselves, these workloads run more efficiently and at a lower cost. The Cloud Platform Services adoption pattern can improve developer productivity by eliminating the need to work at the image level, so that developers can instead concentrate on application development.
The business solutions pattern maps to the SaaS model, where you leverage cloud to innovate with speed and efficiency to drive sales and profitability. Here we look at creating and consuming business solutions on the cloud. Some of the key offerings in this space are things like business process design, social and collaboration tools, supply chain and inventory, digital marketing optimization, B2B integration services, etc. These generic services consumed from the cloud relieve you of the pain of setting things up from scratch, as well as enable you to scale based on your demands.
The Cloud Service Provider (CSP) pattern is the one that most Telcos adopt when they have to serve multiple consumers with a single cloud solution. We provide tools and technologies to design and deploy a highly secure, multi-tenant cloud services infrastructure that can integrate nicely with plenty of 3rd-party applications.
As we understand, it is easy to adopt the IaaS pattern, and there is more work to do when we implement the SaaS or CSP patterns. But the gain is greater when we share at the software or application level. Depending on where you are in your current IT environment, you can pick and implement whichever of these patterns suits you. The work that we have done to analyse these patterns and provide a consistent set of technologies and tools to build them out should make life easy for you. Leverage it – less pain and more to gain.
There's still time to sign up for the IBM webcast: Managing the Cloud – Best practices for cloud service management
Organizations today are looking to cloud computing to deliver cost savings and faster service delivery. However, most organizations are still struggling to have the basic IT infrastructure that is necessary to take the leap to a robust cloud. This session will explain how service management can help provide the essentials to maintain service levels in the cloud and best practices based on IBM's work with customers. This information will provide the foundation for building and managing a cloud to meet your business objectives and transform IT.
The Next Big thing – Cloud enabled business model Innovation
I remember the day when one of our Executives, Nick Donofrio, visited us in India. He is like the chief mentor for all the members of the IBM technical community, and he has seen IBM and the IT industry for many years. He was addressing a Technical Exchange event a few years ago when someone in the audience asked him this question: "Sir, you have seen technology for so many years now – can you tell us what's going to be the next big thing in terms of invention/innovation?" Everyone was all ears waiting for the answer – is it the next version of the internet, the search, a web 2.0 application or maybe an intelligent mobile app? But his answer was that he believes there is not going to be any next big thing in technology. The next big thing for all of us is going to be business model innovation. Even today his statement holds very true. Businesses that are able to reinvent their business model are succeeding and managing to stay on top, while others vanish from the scene.
There are lots of innovative technical things happening all around us, like:
people connecting and doing more and more using mobile devices
media thinning the line between work and life, and business having reach into your social network
data and its related analytics giving businesses insights that were not possible a few years ago.
I believe the next big thing is going to be how well you can use all these elements for business seamlessly and cost effectively. The key to succeed is to use technology to do this business model innovation, and do it faster. How do you do it faster? The answer is cloud. I say this based on the data IBM has gathered analyzing the cloud adoption patterns of over 2,000 customers. All of them have seen the advantages below with cloud.
Considering all these factors, I think the next big thing is Cloud Enabled Business Model Innovation. I was able to relate easily to some of the latest announcements we have made in the cloud, because they simply restate this belief. As discussed in this interesting video by IBM's Saul Berman (Innovation & Growth Leader), 60% of the customers IBM interviewed say they would consider cloud immediately, and 70% of them intend to use cloud to enable business model innovation. Based on the rate at which they adopt new technologies, they may be an Optimizer (looking at improving the existing model), an Innovator (looking at a new model) or a Disruptor (ready to bring in game-changing ideas).
So as today's IT leaders, let us broaden our focus from merely delivering technology to solving larger business issues. One great opportunity for that is to tune in to, or be present for, SWG Universe India 2011. You will get a chance to listen to some great speakers who will talk about how to use cloud for business model innovation.
Cloud enabled business model innovation, I feel, is the next big thing that could change IT and businesses. So come, let's Rethink IT & Reinvent Business.
To be responsive to your reading interests and learning needs, I thought I would gather some quick feedback to help me understand your reactions to my blog. Please respond by taking this short survey; it should not take more than two and a half minutes of your time. It is primarily for me to improve the focus of my blog. Please note that there is nothing official about this survey and all responses are anonymous within the system.
You can see all the blog entries in this category by clicking on the tag "stepbystep". If you liked any entry in the blog, please rate it by clicking on the "star", or feel free to provide your comments and inputs through this feedback form. You can access the feedback form here. I look forward to your comments and inputs.
I've been writing about the step-by-step approach to cloud till now. Given the rate at which I see cloud computing being adopted inside and outside the enterprise, I think we really need to get out of our step-by-step approach and start riding the wave. IBM has implemented maybe over 2,000 cloud engagements in the last year and is managing over 1 million virtual machines today. We have identified the customer cloud adoption patterns and entry points to cloud, and have lots of lessons learnt and experience to share. So wouldn't it be nice if we could talk to you about these things and share the best practices with you? All of this is difficult to discuss through a blog, so you have a better option: The IBM Software Universe 2011 – The Next Big Wave.
Yes, the 7th edition of IBM India's largest annual software conclave is happening this year on Oct 19th and 20th. I believe it would be time well spent to learn from our experience and accelerate your adoption of cloud. We have some interesting sessions on Private Cloud [R]Evolution, which will discuss some of the key trends and technologies to look at for building the cloud inside your firewall. If you are looking to understand how to expand your existing data center capabilities for better visibility, control and automation across your physical and virtual environments, then "Integrated Service Management – Thinking Beyond the Data Center" is a must-attend session. If you are a business or enterprise IT manager looking to start with the cloud, you don't want to miss the "Get Your Head in the Cloud" session, which explains how you could get some of your collaboration requirements from the cloud.
Finally, it is a wonderful opportunity for you to talk to some of the Distinguished Engineers and IBM Fellows, who can spend 1:1 time with you to listen to your issues and problems as well as discuss the future roadmap. For instance, Bala Rajaraman, a Distinguished Engineer whose responsibilities include the architecture and design of Cloud & Service Management solutions, is going to be in India, and it is your opportunity to catch up with him.
Last but not least, there are going to be Solution Expos set up for you, so you have an opportunity to touch and feel the cloud solutions. These should include industry-specific demos and technology/product demos from IBM as well as partners.
So be there on Oct 19-20 at the IBM Software Universe 2011. It is going to teach you a new skill: how to ride the next big wave... the cloud wave.
Join us for the Managing the Cloud webcast series to learn more about best practices, technical approaches and capabilities to help solve your business and technical challenges in the cloud. Sign up for these free one-hour webcasts today.
Sign up for the "Managing the Cloud" webcast at: https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=swg-tivoli-nov8managingcloud
As we discussed in the previous post, it is important that all the processes work together to bring successful automation to the cloud management platform. A process workflow automation engine is what makes this possible. In this chapter we will discuss the Tivoli process automation engine, which forms the base for IBM process automation in the cloud space.
The Tivoli process automation engine provides a user interface, configuration services, workflows and the common data system needed by IBM Service Management products and other services. As we already know, IBM Service Management (ISM) is a comprehensive and integrated approach to service management, integrating technology, information, processes and people to deliver service excellence and operational efficiency and effectiveness for traditional enterprises, service providers and mid-size companies. The Tivoli process automation engine, previously known as Tivoli base services, provides the base infrastructure for applications like Tivoli Maximo Asset Management, Change and Configuration Management Database (CCMDB), Tivoli Service Request Manager (SRM), Tivoli Asset Management for IT (TAMIT), Tivoli Provisioning Manager and Tivoli Service Automation Manager. Any product that has the Tivoli process automation engine as its foundation can be installed alongside any other product that has it.
It enables:
Management that integrates and automates IT management processes
Management that integrates people, processes, information and technology for real business results
Management that automates tasks to address application or business service operational management challenges
With a common process automation engine, we can successfully link operational and business services with infrastructure through a single (J2EE) platform. We can also leverage current investments by linking this engine with existing process automation technologies and products. By building a unified platform to automate processes, we have taken data integration to the next level, where sharing data between applications has never been easier. This integrated process automation platform can support repeatable IT functions like Incident Management, Problem Management, Change Management and Configuration Management, all the way through to Release Management. All of these processes tie into the CMDB, where they share consistent data via bidirectional integration. The platform supports best practices such as ITIL and other industry best practices. This facilitates an automated approach across the IT management lifecycle. It also forms the basis for automating repetitive tasks that can be handled by the system instead of requiring costly human intervention. Through its adapters, TPAE provides data federation from the multiple sources you already have, translating the information into usable data that can be leveraged by internal processes and workflows.
Figure 1: Tivoli process automation integrated portfolio
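The core idea here, multiple management processes operating on one shared data model instead of point-to-point integrations, can be sketched in plain Java. This is only an illustration: the SharedStore and Process classes and the record keys are hypothetical stand-ins, not TPAE's actual API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch: processes exchange consistent records through one shared store.
public class WorkflowSketch {

    // Stand-in for the common data model (a real CMDB is far richer).
    static class SharedStore {
        private final Map<String, Map<String, String>> records = new HashMap<>();
        void put(String id, String key, String value) {
            records.computeIfAbsent(id, k -> new HashMap<>()).put(key, value);
        }
        String get(String id, String key) {
            Map<String, String> rec = records.get(id);
            return rec == null ? null : rec.get(key);
        }
    }

    // A process is just an ordered list of steps operating on the shared store.
    static class Process {
        final List<Consumer<SharedStore>> steps = new ArrayList<>();
        Process step(Consumer<SharedStore> s) { steps.add(s); return this; }
        void run(SharedStore store) { steps.forEach(s -> s.accept(store)); }
    }

    public static void main(String[] args) {
        SharedStore cmdb = new SharedStore();

        // Incident management records the failing server in the shared store...
        Process incident = new Process()
            .step(db -> db.put("srv-01", "status", "degraded"));

        // ...and change management reads that same record; no point-to-point
        // integration is needed between the two processes.
        Process change = new Process()
            .step(db -> db.put("srv-01", "change", "restart approved: "
                               + db.get("srv-01", "status")));

        incident.run(cmdb);
        change.run(cmdb);
        System.out.println(cmdb.get("srv-01", "change"));
        // prints: restart approved: degraded
    }
}
```

Here the change process reads the record the incident process wrote; adding a third process would not require any new pairwise integration, which is the point of a common engine.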
Brocade and Avnet Technology Solutions Bring Simplified Server and Desktop Virtualization Solutions to the Channel Through the Brocade CloudPlex Architecture
Integrated Solutions Provide Open, Validated Virtualization Stacks
LAS VEGAS, NV -- (MARKET WIRE) -- 08/30/11 -- VMworld 2011 -- Brocade (NASDAQ: BRCD) and Avnet Technology Solutions, the global IT solutions distribution leader and operating group of Avnet, Inc. (NYSE: AVT), today announced the joint development of marketing and enablement support for a new set of multi-vendor, pre-tested and configured virtualization solutions to help reseller partners in the U.S. and Canada design and deploy open, efficient and scalable virtualization solutions.
This joint effort will focus on building virtual compute blocks, which are integrated, tested and validated solution bundles comprised of server, virtualization, networking and storage resources, and an integral element of the Brocade® CloudPlex™ architecture. This open, extensible architectural framework will help end-user organizations and solutions integrators build the next generation of distributed and virtualized data centers in a simple, evolutionary way. Avnet will be providing integration services for these solutions, as well as other channel enablement and sales support.
At VMworld, Brocade will be introducing the first success of its joint development efforts with Avnet: a reference architecture and validated solution designed to cost-effectively scale virtual desktop infrastructure (VDI) environments to support thousands of clients (or desktops) per solution bundle. VDI technology offers IT managers a simple and highly effective way to manage operating system and application updates throughout their enterprise, one of the most complex and time-consuming tasks they face in today's highly distributed and mobile computing model.
However, to scale properly, enterprise-wide VDI deployments should be built on top of an open, multi-vendor-based architecture that can scale easily and quickly migrate desktops and user data between data centers when needed, all while simplifying the management and configuration processes.
Specifically, this new set of virtualization solutions will incorporate Brocade and VMware networking and hypervisor technologies in conjunction with a number of compute and storage platforms. All virtualization solutions are qualified through a multistage validation process conducted by Brocade and Avnet, enabling value-added reseller and solutions integrator partners to deploy these solutions with confidence and the assurance of interoperability.
"The collaboration between Brocade and Avnet brings forth best-in-class virtualization solutions designed to address the challenges facing resellers and their end customers today," said Barbara Spicek, vice president of Global Channels at Brocade. "This joint effort underscores Brocade's belief that multi-vendor technology integration must take place in the channel. Whether it be through a world-class distributor like Avnet, whose integration capabilities are second to none, or via a reseller that specializes in virtualization and cloud technologies, the channel is where the 'rubber meets the road.' Recognizing this, Brocade is committed to working with channel partners to deliver viable, compelling technology offerings that arm resellers with what they need to remain differentiated in an increasingly competitive marketplace."
As the use of virtualization technology becomes increasingly prevalent, end customers are looking to the reseller community to provide the guidance and expertise on how best to deploy and integrate virtualization technology into their operations. The creation of fully validated, pre-configured, open standards-based virtualization technologies presents an attractive value proposition to resellers looking to quickly and seamlessly implement these solutions in multi-vendor IT environments. In addition, resellers will benefit from the technical know-how and enablement resources available through Avnet's data center services offerings.
"While virtualization and cloud computing presents a world of opportunities to reseller partners, it also introduces new levels of complexity into their business and operations," said Scott Look, vice president and general manager of the Technology Infrastructure Solutions Group at Avnet Technology Solutions, Americas. "At Avnet, we are committed to fully supporting our reseller partners and have evolved our business to be able to meet their changing needs through vertical industry-focused integration services, partner enablement programs and go-to-market assistance."
In addition, Avnet's SolutionsPath® methodology provides resellers a wide range of tools to help them gain relevant technical skills and develop specialized solution-selling capabilities in vertical and technology markets. Avnet's SolutionsPath includes data center technology practices around mobility, networking, security, storage and virtualization, along with vertical market practices for the energy, finance, government, healthcare and retail industries. Avnet resellers also have access to value-added supplemental resources, such as hosting live, online demonstrations from virtually anywhere by connecting to the Avnet Global Solutions Center in Phoenix, Ariz.
Earlier this year, Brocade and Avnet introduced Avnet Accelerator, an invitation-only program designed to help resellers tap into the power of Ethernet fabric technologies to assist their customers in simplifying their network architectures and managing the growth of server virtualization and virtual machines inside their data centers. Today's announcement provides yet another opportunity for resellers to capitalize on the sale and deployment of virtualization solutions.
Availability and Additional Details
The Brocade/VMware VDI solution will be available to U.S. and Canada resellers through Avnet this November. Additional multi-vendor, open standards-based virtualization solutions will be available in the following months.
For more information on how Brocade networking technologies can help customers transition to the cloud, please visit www.brocade.com/cloud
About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
About Avnet Technology Solutions
As a global IT solutions distributor, Avnet Technology Solutions collaborates with its customers and suppliers to create and deliver services, software and hardware solutions that address the business needs of their end-user customers locally and around the world. For fiscal year 2011, the group served customers and suppliers in more than 70 countries and generated US$11.5 billion in annual revenue. Avnet Technology Solutions (www.ats.avnet.com) is an operating group of Avnet, Inc.
About Avnet, Inc.
Avnet, Inc. (NYSE: AVT), a Fortune 500 company, is one of the largest distributors of electronic components, computer products and embedded technology serving customers in more than 70 countries worldwide. Avnet accelerates its partners' success by connecting the world's leading technology suppliers with a broad base of more than 100,000 customers by providing cost-effective, value-added services and solutions. For the fiscal year ended July 2, 2011, Avnet generated revenue of $26.5 billion. For more information, visit www.avnet.com.
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Brocade Unlocks the Power of the Cloud Through Open, Multi-Vendor Virtual Compute Blocks
Brocade and Its Partners Help Customers Build the Next Generation of Distributed and Virtualized Data Centers in a Simple, Evolutionary Way
LAS VEGAS, NV -- (MARKET WIRE) -- 08/30/11 -- (VMworld 2011) -- Today at VMworld, Brocade (NASDAQ: BRCD), the leader in fabric-based data center architectures, announced significant advancements to the Brocade® CloudPlex™ architecture with new Brocade Virtual Compute Blocks. These bundled solutions consist of integrated, tested and validated multi-vendor server, virtualization, networking and storage resources. Demonstrating substantial partner traction, the new solutions are available today, delivered and supported in collaboration with a wide range of alliance partners, including Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
This open approach is an underlying tenet of the Brocade CloudPlex architecture, which was announced in May 2011. The open, extensible framework is designed to help customers build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. It is the foundation for integrated compute blocks and it supports existing multi-vendor infrastructure to unify customers' assets into a single compute and storage domain.
"Organizations are seeking to maximize the benefits of cloud computing through more efficient infrastructure procurement, pre-integrated components, faster support response, and greater choice in best-in-class products to meet specific business needs," said John McHugh, CMO of Brocade. "Brocade Virtual Compute Blocks leverage our Ethernet fabrics and industry-leading Fibre Channel SAN fabrics to allow our partners to create integrated stacks that optimize cost effectiveness, flexibility and performance. Because these solutions are open, they allow our customers to scale components independently and better utilize legacy infrastructures."
According to IDC research, "As organizations move to create a dynamic data center enabled by virtualization, they are moving to architectures where server, storage, and network assets are in tighter alignment into converged infrastructures. IDC defines a converged infrastructure as one in which the server, storage, and network infrastructure resources are treated as pools to be assigned as needed to business services... The top benefits organizations achieve by implementing a converged infrastructure are cost savings, simplified management, better availability, increased flexibility, and higher utilization."(1)
Brocade Virtual Compute Block Partner Solutions
Brocade Virtual Compute Block solutions include hypervisor software integrated with servers, storage and Brocade fabric networking products in bundled, pre-racked and pre-tested configurations enriched by technology from Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
Dell
Brocade and Dell have partnered to develop a reference architecture that includes Dell Compellent Fibre Channel storage, Dell PowerEdge servers, Brocade data center and SAN switches and the VMware hypervisor, which is being shown at the Brocade VMworld booth.
"Our reference architecture developed with Brocade demonstrates Dell Compellent's commitment to provide open, cloud-optimized solutions for our customers' increasingly dynamic requirements in Fibre Channel environments," said Phil Soran, president of Dell Compellent. "Enterprises that deploy this reference architecture benefit from the ability to scale virtualization with their business requirements while deploying industry-leading storage from Dell Compellent and Fibre Channel networking solutions from Brocade."
EMC
EMC and Brocade have joined forces with several partners to deliver Virtual Compute Blocks, which combine VMware virtualization software and management tools, EMC® VNXe™ unified storage, servers and integrated Brocade Fibre Channel and Ethernet fabric networking technologies. EMC and Brocade are now working with Arrow, Tech Data, First Distribution and Acao to deliver Virtual Compute Blocks in the U.S., and in parts of Europe, Africa, and South America. These integrated, easy-to-install solutions enable EMC customers to quickly deploy private and hybrid cloud infrastructures, which provide data center consolidation, availability, scalability and automation.
"Our integration work with Brocade is a key enabler for our resellers in providing simplified deployment of Virtual Compute Blocks and further demonstrates our commitment to delivering cloud infrastructure solutions for our mutual customers that help transform data centers into highly efficient and agile environments," said Josh Kahn, vice president of Solutions Marketing at EMC.
Fujitsu
Fujitsu and Brocade have partnered to create solutions supporting Fujitsu's Dynamic Infrastructures architecture, which will help enterprises boost business agility, efficiency and IT economics. These are designed for data centers of the future, delivering powerful automated pools of computing resources made up of server, storage, network and virtualization technology.
"Fabric-based networks are an important requirement to successful deployments of solutions that will enable our customers to accelerate their cloud-based IT initiatives," said Jens-Peter Seick, senior vice president of the Product Development Group at Fujitsu Technology Solutions. "We are pleased to add Brocade Ethernet fabric technologies to our portfolio, which enhances the long-term partnership we have had in deploying SANs for our customers' virtualized environments."
Hitachi Data Systems
Hitachi converged data center solutions combine storage, compute and networking with software management, automation and optimization to automate, accelerate and simplify cloud adoption. As a key networking partner, Brocade provides networking solutions for Hitachi converged data center solutions, including Ethernet switch, Fibre Channel fabric data center switches, and Fibre Channel switch modules for the Hitachi Compute Blade family. Solutions include:
Hitachi solutions built on Microsoft Hyper-V Cloud Fast Track: A combination of Hitachi storage and compute, with Brocade networking and Microsoft Windows Server 2008 R2 with Hyper-V and System Center for high-performance private cloud infrastructures and an avenue for further automation and orchestration.
Hitachi Unified Compute Platform: An open and converged platform that provides orchestration and management within the portfolio of Hitachi converged solutions for automated dynamic management of servers, storage and networking to create business resource pools from a simple, yet comprehensive interface.
Hitachi Converged Platform for Microsoft Exchange 2010: The first in a portfolio of pre-tested application-specific converged solutions, engineered for rapid deployment and tightly integrated with Exchange 2010's powerful new features for resilience, predictable performance and seamless scalability.
"HDS and Brocade have partnered to deliver tested and proven solutions with tightly integrated storage, compute and networking products that allow our mutual customers to benefit from Ethernet switch and Fibre Channel fabric technologies to create flexible cloud-based infrastructures," said Asim Zaheer, vice president of Corporate and Product Marketing at Hitachi Data Systems. "Through quicker deployment, automation and scalability, Hitachi converged data center solutions help organizations adopt cloud at their own pace and see predictable results and faster time to value."
VMware
VMware and Brocade have developed a reference architecture solution that enables organizations to create a scalable virtual desktop infrastructure (VDI) environment.
The VMware/Brocade VDI reference architecture, VMware View™, combines Brocade VDX data center switches and converged network adapters, Intel x86-based rack servers, iSCSI-based storage and Trend Micro security software.
Benefits of the VMware/Brocade VDI solution include best-in-class performance and scalability, enhanced security, ease-of-migration and lower total cost of ownership.
"VMware and Brocade have collaborated on a joint VDI solution that addresses our customers' needs to improve business productivity through increased performance, secured client access and elimination of business disruptions," said Vittorio Viarengo, vice president of End-User Computing at VMware. "IT organizations can utilize our reference architecture to deploy a quick-start configuration within their data center or at remote locations. In addition, it can be used as a test or development platform for businesses eager to gain the benefits and advantages of virtualizing user desktops."
Avnet Virtual Compute Block Solutions
Separately today at VMworld, Brocade and Avnet announced the joint development of marketing and enablement support for a new set of multi-vendor, pre-tested and configured virtualization solutions. The first of these is a reference architecture and validated solution designed to cost-effectively scale virtual desktop infrastructure (VDI) environments to support thousands of clients (or desktops) per solution bundle. The VDI bundle will help Avnet reseller partners design and deploy open, efficient and scalable virtualization solutions for their end customers by incorporating Brocade and VMware networking and hypervisor technologies in conjunction with a variety of compute and storage platforms.
VMware, VMware View and VMworld are registered trademarks and/or trademarks of VMware, Inc. in the United States and/or other jurisdictions. The use of the word "partner" or "partnership" does not imply a legal partnership relationship between VMware and any other company.
In a cloud service provider environment, there are various business processes and compliance requirements that need to be addressed before the environment can go live and become operational. The following are the areas that need to be designed:
The cloud service requirements
List of Infrastructure Services
List of Platform Services
List of Software Services
Service Definitions & Non-Functional Requirements
Assessment of the Current IT Environment
External/Existing Systems & Capabilities to integrate with from a Business Support and Operations Support perspective (like Billing)
Number of technical environments needed (like Demo, Dev, Staging)
Scalability and Capacity Requirements
Design of the Management Platform
Recovery of the Management Platform
Processes for managed servers
Backup and Restore of Images
Roles & Locations
Requester & Approver Administration Workflow
Requirements for Images
Interface Change Requirements for the Self-Service Offering
Support and Operational Service Model
Operations & SLA Management
Change & Configuration Management
Problem and Defect Management
Monitoring of the management platform
Scale & Growth Workflow
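To make the planning concrete, the design areas above can be tracked as a simple status checklist. Below is a minimal Java sketch; the class, the area names and the Status values are illustrative assumptions, not any IBM-defined schema.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative checklist for tracking design areas before go-live.
public class DesignChecklist {
    enum Status { NOT_STARTED, IN_PROGRESS, DONE }

    // LinkedHashMap preserves the order in which areas were added.
    private final Map<String, Status> areas = new LinkedHashMap<>();

    void add(String area) { areas.put(area, Status.NOT_STARTED); }
    void mark(String area, Status s) { areas.put(area, s); }

    // Count the areas still not signed off.
    long remaining() {
        return areas.values().stream().filter(s -> s != Status.DONE).count();
    }

    public static void main(String[] args) {
        DesignChecklist c = new DesignChecklist();
        c.add("Cloud service requirements");
        c.add("Infrastructure / Platform / Software service catalogs");
        c.add("Non-functional requirements (availability, scalability, capacity)");
        c.add("Backup and restore of images");
        c.add("SLA management and operational service model");

        c.mark("Cloud service requirements", Status.DONE);
        System.out.println(c.remaining() + " areas still open");
        // prints: 4 areas still open
    }
}
```

A flat list like this is enough to make the go-live gate explicit: the environment is ready only when `remaining()` reaches zero.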
IBM's strategy differs from other vendors' in that it is focused on bridging business and IT processes using a common software framework with common services, including process automation and security.
IBM Service Management is built on the Tivoli Service Management Platform and wrapped with best practices, methodologies and services to help you deliver services to your customers effectively and efficiently.
We provide an integrated solution that represents the full management of data, processes, tooling and people. The key differentiator is a common data model that all the core solutions can share for simple data sharing. It is important that all the processes work together, and a process workflow automation engine is what makes this possible. We will discuss this common workflow process automation engine in the next post.
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 --
Brocade (NASDAQ: BRCD) today announced that FleetCor,
a leading independent global provider of specialized payment products
and services to businesses, commercial fleets, major oil companies,
petroleum marketers and government fleets, has selected Brocade as the
vendor to build its cloud-optimized
network. This new network enhances FleetCor's ability to securely
process millions of transactions monthly and ultimately better serve its
commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor
cardholders worldwide, and they are used to purchase billions of gallons
of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help
evolve its data center and IT operations into a more agile private cloud
infrastructure. Brocade® cloud-optimized networks
are designed to reduce network complexity while increasing performance
and reliability. Brocade solutions for private cloud networking are
purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we
looked at market leadership and non-stop access to critical data," said
Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade
cloud-optimized networking solutions are perfect for our data centers
because they allow us to optimize applications faster, virtually
eliminate downtime and help us meet service level agreements for our
customers. Moving to a cloud-based model also provides us the
flexibility to make adjustments on the fly and access secure information
virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router for each of its three data
centers, citing scalability as a major driver for the purchase. This
approach enables FleetCor to virtualize its geographically distributed
data centers and leverage the equipment it already has, at the highest
level, to achieve maximum return on investment. The Brocade MLXe
provides additional benefits for FleetCor by using less power and having a
smaller footprint than competitive routers, which is critical in power- and
space-constrained locations in order to allow for growth. The Brocade
MLXe also enables continuous business operation for FleetCor through
Multi-Chassis Trunking; massive scalability, supporting the industry's
highest 100 GbE density with no performance degradation for advanced
features like IPv6; and flexible chassis options.
The Brocade ServerIron ADX
Series of high-performance application delivery switches provides
FleetCor with a broad range of application optimization functions to
help ensure the reliable delivery of critical applications.
Purpose-built for large-scale, low-latency environments, these switches
accelerate application performance, load-balance high volumes of data
and improve application availability while making the most efficient use
of the company's existing infrastructure. The series also delivers dynamic
application provisioning and de-provisioning for FleetCor's highly
virtualized data center and enables seamless migration and translation to
IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers,
FleetCor has eliminated thousands of costly networking cables, saving
it hundreds of thousands of dollars and allowing the company to segment,
streamline and secure its network. FleetCor has also been able to
easily integrate Brocade network technology with third-party offerings
already installed in the network, for complete investment protection.
FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for
its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in
each of our data centers to help us leverage the benefits of cloud
computing and the Brocade MLXe delivered on all fronts," said Keirbeck.
"By virtualizing our data center, Brocade allows for non-stop access to
the mission-critical data that FleetCor and its customers rely on every
day. We chose the Brocade MLXe because of the tremendous results we
already saw from our existing Brocade solutions and the exceptional
support and service."
According to a report from analyst firm Gartner, "Although 'economic
affordability' is an immediate, attractive benefit, the biggest
advantages (of cloud services) result from characteristics such as
built-in elasticity and scalability, reduced barriers to entry,
flexibility in service provisioning and agility in contracting."(1)
(1)Gartner " Cloud-Computing Service Trends: Business Value Opportunities and Management Challenges, Part 1" February 23, 2010
About Brocade
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health,
Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is
Critical, the Network Is Brocade are trademarks of Brocade
Communications Systems, Inc., in the United States
and/or in other countries. Other brands, products, or service names
mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not
set forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade.
Brocade reserves the right to make changes to this document at any time,
without notice, and assumes no responsibility for its use. This
informational document describes features that may not be currently
available. Contact a Brocade sales office for information on feature and
product availability. Export of technical data contained in this
document may require an export license from the United States government.
Note: This is a (slightly updated) re-post from a personal blog - just my view in the context of IBM's drive to foster open choice and collaboration. Please bear in mind that this is based on my personal thoughts (not an official IBM position) and read the article as it is intended to be - thought provoking ... enjoy!
Having returned from the European Red Hat Partner summit and the VMware vForum, where I presented on behalf of IBM, it took me a while to digest the “openness” of it all… so let me share my thoughts retrospectively. The key messages conveyed in both events were (un?)surprisingly similar, considering that we have a major open source software company on one side and a more traditional “business” model on the other.
Being proprietary rocks…! (?) Let’s be straight – one could argue that in an ideal world (for selfish, money-making businesses without ethics) there would be no open source, being proprietary rocks! After all, making money by attracting and “retaining” clients (I’m deliberately not saying “locking them in”) is ultimately the goal of every business – and that (the attracting/retaining clients bit) actually applies to VMware in the same way as Red Hat (if we don't mix up the ‘open source community’ with Red Hat as a business)… Now, that would obviously completely ignore the power and dynamics of an open technical community, but more importantly it’s not in the interest of the consumer… Public cloud promises to empower the consumer, so they will increasingly be looking for choice… no capital dependency, outsourced, pay-per-use service operation models enable you (in theory!) to switch providers like I just switched my energy and gas supplier to XXX last week – go to a comparison site, find the best deal and “click”… done (obviously not reality today with cloud).
Public cloud can only exist on open source…? What both events made crystal clear is that increasingly many “traditional” businesses will be forced to have a foot in both camps in order to balance customer demand for open choice with a business model allowing them to make money and retain customer “affinity” (otherwise we probably wouldn't see URLs like this: http://www.microsoft.com/opensource/).
There was a bold statement by a speaker at the Red Hat summit: “Public cloud can only live on open source!” I was initially inclined to agree but then thought this through again and adjusted it mentally to what I believe to be more appropriate: “public clouds need to live on INTEROPERABLE source”… Open source should of course help to facilitate this, but if I just end up with a bunch of non-intuitive, non-integrated code, with undocumented APIs and outlandish image formats, then the fact that it’s open source doesn’t help me at all. So I am not saying that I don’t believe in open source, actually quite the opposite; all I’m saying is that the “open source” stamp on its own is not good enough, and as a consumer of resources (not a developer) I would indeed consider a proprietary solution as long as it is intuitive, cheap, with well-documented APIs and – that is the key – interoperable with other public providers. So it is important to understand the difference between open source and open standards.
The public cloud is only as good as the “connectors” to it - Key Battle 1: Hybrid Connectors VMware very much provides the majority of today’s x86 virtual enterprise footprint (a good chunk of that on IBM infrastructure, with IBM very closely partnering with VMware). With that, VMware has potentially a critical control point in the private cloud. The public cloud is a completely different story, with over 80% being OSS based and VMware hardly to be seen yet! So especially for VMware it must be of utmost importance to provide a ‘best of breed’ connector between existing vSphere infrastructures and public vCloud Director resources before others provide this linkage to other (non-VMware) public platforms. So I expect a lot of focus on vCloud Connector functionality from VMware (in the same way as on ‘Concero’ from Microsoft). VMware’s strategy therefore is to entice service providers to take advantage of the existing vSphere footprint: “Hey look, many of your customers already have VMware; the only thing you need to do is provide public vCloud Director resources for them to burst out to – we provide the connector, it’s as simple as that!” Now, that might sound great, but the main concern for me (the consumer) is simply how much of a dependency is being created by doing this – how easily can I go and "click" to switch to an Amazon, IBM or Rackspace cloud once I am in that environment…? So there clearly is a chance to develop a public VMware cloud ecosystem around vCD in this way – but how long before someone else offers seamless alternatives (more than just Amazon’s VM Import)? So will it be enough to only provide linkage to public VMware vCD resources? IMHO absolutely not. I am very curious to see how much VMware will enable connectivity to other public provider platforms going forward… Again, it will be a fine balancing act, but I’m convinced that it won’t be successful otherwise.
In the meanwhile, keep your eyes peeled and expect the industry to increase focus on enabling hybrid connectors – I obviously can't make any specific forward-looking statements from an IBM perspective. But just take Red Hat as an example: it made clear that CloudForms (their IaaS platform) can indeed manage VMware through their Deltacloud driver and – while currently positioning CloudForms for private and hybrid – their vision (of course) is for Deltacloud to be the top-level public layer linking into private (or public) VMware clouds.
Key Battle 2: PaaS Now – here’s another (the real?) battle for cloud control (or better, ‘ecosystem control’)… Who will provide the application platform for these future cloud-based applications? Who will control the ecosystem of future application suites? Who will be the next "Microsoft", you might ask? A lot will come down to control points and the pain of moving. While switching public cloud providers could really become as easy as switching utility providers, switching your application platform (e.g. as an ISV) is rather like moving house! Using open standards is a great value proposition here, and it’s not just the OSS providers who have realised this…
Red Hat recently announced their hosted “OpenShift” PaaS platform, which essentially allows developing and running Java, Ruby, PHP and Python applications and comes in 3 different editions: from 1) “Express” (free), which provides a runtime environment for simple Ruby, PHP and Python apps, over 2) “Flex” for multi-tiered Java and PHP apps with more options (like MySQL DBs and JBoss middleware), to full control with 3) the “Power” edition, supporting “any application or programming language that can compile on RHEL 4, 5, or 6” and enabling you to deploy apps directly on EC2 and (in the near future) to IBM’s SmartCloud.
VMware had earlier announced their own open “Cloud Foundry” PaaS project; it has incarnations as a fully hosted service (currently in beta), as an open source project (CloudFoundry.org) and as a free single PaaS instance for local development use. An interesting move IMHO which could help the adoption of this layer for VMware (away from e.g. MS Azure, Google’s App Engine or Amazon’s Elastic Beanstalk).
So what's IBM doing in this space? IBM has recently announced the IBM Workload Deployer - an evolution of the WebSphere CloudBurst hardware appliance. It essentially stores and secures "WebSphere Application Server Hypervisor Edition images" and, more importantly, workload patterns which can be published into a cloud. These workload patterns (think of them as customizable templates that capture the settings, dependencies and configuration required to deploy applications) enable you to focus on what essentially differentiates PaaS from IaaS... the application rather than the infrastructure. Dustin Amrhein explains this much better than me in this little blog. Importantly, all this comes with REST APIs that allow for standards-based integration into existing environments, including Tivoli. If you have only 10 minutes to spare, I can only recommend watching this great video from Cloud Jason (there are 3 more)... I promise you will get a really good idea of what IWD can do!
Professional Suicide So, yes, I honestly believe that KVM has a good chance to become the hypervisor of choice for the public cloud. However… that is unlikely to be the control point. So which management platform(s) will take that all-important crown…? Will it be an OSS-based one? I don’t want to hazard a guess; there are many… and that is part of the problem. Many argue that the open source “communities” will have to overcome a challenge and become a COMMUNITY if they want to succeed. ESX could not be beaten with 7 or 8 different (but weak) flavours of Xen, and that was just a single OSS project splintered by commercial offerings… In the same way, the sea of OSS-based cloud controllers, with Eucalyptus, OpenStack, CloudStack, Deltacloud and OpenNebula, faces focussed (more proprietary) heavy-weights like Microsoft, Google and Amazon. The increasing number of OSS management solutions and “open bodies” will also make e.g. VMware less nervous than intended, as long as they indirectly compete with each other…
BUT (and it’s a big “but”) I would argue that anyone not strategically looking at these open solutions is at best ignorant or – e.g. if you are a service provider yourself – more likely long-term professionally suicidal … yes, in an ideal world everyone wants ‘today’s best of breed' but more critically you have to maintain your negotiation potential through the ability to switch and if only for that reason alone you need to keep your options open! It will be of the utmost importance to partner with solution providers who share this mind-set and have the capability and strategy to support such a long-term goal and yes, IBM is clearly uniquely positioned to fulfill this role. And while I spoke to many completely different clients at both events, that was a common concern raised by most of them.
Industry endorsement like the recent OVA announcement - with IBM being a major driving force and supporter - will help to give KVM the needed credibility and weight … I am looking forward to seeing these visions translated into tangible solutions.
Great video. A great many folks have already started making the journey into the clouds without being fully aware of it. Most large enterprise data centers are consolidating and virtualizing servers, storage and networking today; once all three of those areas are being consolidated and virtualized, you are transforming business processes and will eventually reach a point where infrastructure/information on demand is the next logical step.
Cloud Service Provider Platform (CSP2) is a carrier grade cloud offering
that contains enhancements over the base ISDM solution to provide a
multi-tenancy environment that allows both internal and external users to exist
on the same cloud and management platforms. IBM's new CSP2 platform provides
cloud services such as desktop management to influence the cloud based business
strategy of communications service providers.
Cloud Service Provider Platform is specifically tailored to the needs of CSPs
and is designed to help them successfully:
Create cloud services that
harness the strengths of a diverse partner ecosystem and rapidly enable
applications and solutions to extend their market reach.
Manage cloud services quickly
and easily with an open, carrier-grade, secure, scalable, automated and
integrated service management solution.
Monetize cloud services by
leveraging business intelligence and analytics to achieve differentiation,
maximize revenue and enhance the customer experience.
Figure 1: IBM Integrated Service Management Solution for Cloud Service Providers
Communications service providers (CSPs) around the world are
looking for smarter ways of doing business. They are being challenged to
transform the way services are created, managed, and delivered. CSP2 neatly
integrates and extends the SPDE (Service Provider Delivery Environment) for
Communication Service Providers to build the ecosystem to become a cloud
service provider. For a cloud-based business strategy, check out the video from Scott on the value of CSP2 for CSPs.
In this article learn how to:
Set up a 64-bit Linux instance (a Bronze-level offering) with the Linux Logical Volume Manager (LVM).
Capture a private image and provision it as a new Platinum instance.
Grow the LVM volume and file system to accommodate the new physical volumes.
Configure LVM across physical volumes using Linux LVM-type partitions.
Background on LVM and the test scenario
First, a description of LVM concepts and the test scenario for those who may not be familiar with LVM.
Note: You are about to configure Linux LVM: Here be Dragons. Mind the gap.
The Linux LVM is organized into physical volumes (PVs), volume groups (VGs), and logical volumes (LVs):
Physical volume: Physical HDDs or physical HDD partitions (such as /dev/vdb1).
Extents: PVs are split into fixed-size chunks called physical extents (PEs).
Logical extents (LEs) map 1:1 to PEs and are used for the physical-to-logical volume mapping.
Volume group: A virtual disk consisting of aggregated
physical volumes. VGs can be logically partitioned into LVs.
Logical volume: Acts as a virtual
disk partition. After creating a VG you can create LVs in that VG.
They can be used as raw block devices, swap devices, or
for creating a (mountable) file system just like disk partitions.
File system: LVs can be used as raw devices or swap, but are more commonly "formatted"
with a supported file system and mounted to a defined mountpoint. I'll format the LV as an ext3 file system in this scenario.
Partition table: You'll use tools like fdisk, sfdisk, or
cfdisk to manipulate the block device partition table and create Linux LVM (8e) type partitions.
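The extent bookkeeping described above can be illustrated with a small calculation. This sketch assumes LVM2's default physical-extent size of 4 MiB; the helper function and variable names are mine, not part of the LVM tooling:

```python
PE_SIZE_MIB = 4  # LVM2's default physical-extent size (4 MiB)

def extents_in_pv(pv_size_mib: int, pe_size_mib: int = PE_SIZE_MIB) -> int:
    """Number of whole physical extents a PV of the given size contributes."""
    return pv_size_mib // pe_size_mib

# A VG aggregates the extents of all its PVs; an LV consumes some of them.
pvs_mib = [10240, 20480]            # two hypothetical PVs: 10 GiB and 20 GiB
vg_free_extents = sum(extents_in_pv(s) for s in pvs_mib)
lv_extents = extents_in_pv(8192)    # extents needed for an 8 GiB LV
print(vg_free_extents, lv_extents)  # 7680 2048
```

This is why an LV can span physical disks transparently: the VG hands out extents without regard to which PV they came from.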
Management platform sizing means sizing for the following components, which provide the functional capabilities:
Service Request Management
Service Monitoring & Service Level Management
Service Usage & Accounting
Sizing will also be affected by the non-functional requirements that each of these management platform components must address. Review the performance reports and workload pattern/handling capabilities of each selected product to validate that the proposed sizing can meet the non-functional requirements of the solution.
The size of the management platform depends on the size of the managed environment. It is
preferable to keep a centralized management environment and scale it as needed
when the managed environment grows. This is often not an easy calculation or a simple process; you need to apply sound engineering to plan capacity for each capability. Apart from the capabilities discussed above, the following key areas also need to be covered:
Service Availability Management
In order to size for all these capabilities, you need answers to some very critical questions. The right sizing and capacity planning depend on how well the project can answer questions such as the following:
What operations are expected to be performed with management platform?
What are the average and peak concurrent administrator workloads?
What is the enterprise network topology?
What is the expected workload for provisioned virtual servers, and how do they map to the physical configuration?
For the provisioned servers: What is the distribution size?
What are the application service level requirements?
High Availability (HA) is another important aspect to include in capacity planning. The management platform has to be designed for HA, with appropriate policies defined.
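The kind of back-of-the-envelope estimate the questions above feed into might be sketched as follows. The function and all per-unit coefficients are illustrative assumptions, not vendor sizing guidance; real coefficients come from each product's published performance and workload-handling reports:

```python
def management_cores(managed_servers: int, peak_concurrent_admins: int,
                     cores_per_100_servers: float = 2.0,   # placeholder coefficient
                     cores_per_10_admins: float = 1.0,     # placeholder coefficient
                     ha_factor: float = 2.0) -> float:
    """Rough CPU-core estimate for a management platform.

    Scales with the size of the managed environment and the peak
    concurrent administrator workload, then doubles for an HA pair.
    """
    base = (managed_servers / 100) * cores_per_100_servers
    ui = (peak_concurrent_admins / 10) * cores_per_10_admins
    return (base + ui) * ha_factor

# 500 managed servers, 20 peak concurrent administrators
print(management_cores(500, 20))  # 24.0
```

Even a crude model like this makes the dependencies explicit: grow the managed environment and the platform footprint grows with it, which is why the answers to the questions above matter before any hardware is ordered.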
This IBM® Redpaper™ publication introduces PowerVM™ Active Memory™ Sharing on IBM Power Systems™ based on POWER6® and later processor technology. Active Memory Sharing is a virtualization technology that allows multiple partitions to share a pool of physical memory. This is designed to increase system memory utilization, thereby enabling you to realize a cost benefit by reducing the amount of physical memory required.
The paper provides an overview of Active Memory Sharing, and then demonstrates, in detail, how the technology works and in what scenarios it can be used. It also contains chapters that describe how to configure, manage and migrate to Active Memory Sharing based on hands-on examples.
The paper is targeted to both architects and consultants who need to understand how the technology works to design solutions, and to technical specialists in charge of setting up and managing Active Memory Sharing environments. For performance related information, see: ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/pow03017usen/POW03017USEN.PDF
Dubuque, Iowa and IBM Combine Analytics, Cloud Computing and Community Engagement to Conserve Water
DUBUQUE, Iowa, - 20 May 2011:The City of Dubuque and IBM (NYSE:IBM) today announced that the IBM analytics and cloud computing technology deployed in 2010 by Dubuque as part of its Smarter Sustainable Dubuque research helped reduce water utilization by 6.6 percent and increased leak detection and response eightfold.
The Smarter Sustainable Dubuque Water Pilot Study empowered 151 Dubuque households with information, analysis, insights and social computing around their water consumption for nine weeks. By providing citizens and city officials with an integrated view of water consumption, the Water Pilot resulted in water conservation, increased leak reporting rate, and encouraged behavior changes.
Water savings were measured by comparing the consumption of the 151 pilot households with another 152 control group households with identical smart meters but without the access to the analysis and insights provided by the Water Pilot Study for the nine-week duration.
The smart meter system monitored water consumption every 15 minutes and communicated the readings to the IBM Research Cloud. Additional data was collected on weather, demographics, and household characteristics. Using cloud computing, the data was analyzed to trigger notification of potential leaks and anomalies and helped volunteers understand their consumption in greater detail. Volunteers were only able to view their own consumption habits, while city management could see the aggregate data. All participating homes were volunteers, and the data being collected was anonymous and contained no confidential information.
Participating households were alerted about potential anomalies and leaks and were able to get a better understanding of their consumption patterns and compare and contrast them anonymously with others in the community. Pilot study participants accessed their personal water usage information through a website portal and participated in online games and competitions aimed at promoting sustainable behavior, enabling them to become fully engaged and informed about their consumption and the impact of the changes they made to it. Participants were able to see their data expressed in dollar savings, gallon savings and carbon reduction.
A cloud is not a cloud if it is not elastic. The elastic property of the cloud, expanding and shrinking based on demand, is possible only with proper capacity planning. I feel the most difficult exercise in building a cloud solution is capacity planning for your cloud. By this, I mean you have to size the managed environment as well as the management platform.
Most of the engagements that I’ve walked into have some existing capacity or infrastructure that the client wants us to leverage in the cloud. So the comparison becomes difficult if you don’t have a standard measuring unit for your infrastructure; for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in this guide, in this interesting article:
The answer to this difficult question was to use something called the cloud CPU unit, which is nothing but computing power equal to the processing power of a one-gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz, has the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
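The conversion described above is simple enough to capture in a few lines (the cloud_cpu_units helper is my own illustration, not a standard API):

```python
def cloud_cpu_units(cpus: int, cores_per_cpu: int, clock_ghz: float) -> float:
    """Convert a physical configuration into 1 GHz 'cloud CPU units'."""
    return cpus * cores_per_cpu * clock_ghz

# The example from the text: two 4-core CPUs running at 3 GHz.
print(cloud_cpu_units(2, 4, 3.0))  # 24.0
```

With a common unit like this, a quad-core Intel box and a POWER7 machine can at least be compared on raw clock-normalized capacity, even if per-core efficiency still differs.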
The other dimension of the complexity is to determine the resource needs and do the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask:
How many concurrent users and peak users are there, and what percentage of these users needs to be covered?
What type of workloads do they typically run – development, test?
What are the image attributes – memory, CPU, storage, etc.?
An infrastructure planner for cloud made life easy for me; it had a user-friendly interface to take me through these steps and arrive at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I’ll discuss the details of how to plan the managed environment in my next post.
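As a sketch of how such projections and assumptions might combine into a managed-environment estimate (the function, its inputs, and the coverage assumption are all hypothetical examples, not a published sizing method):

```python
import math

def managed_vms(total_users: int, peak_fraction: float,
                coverage: float, vms_per_peak_user: float) -> int:
    """Round up the VM count needed for the covered share of peak users."""
    peak_users = total_users * peak_fraction * coverage
    return math.ceil(peak_users * vms_per_peak_user)

# 2000 users, 25% concurrent at peak, size for 75% of that peak,
# half a VM per peak user (e.g. shared dev/test images)
print(managed_vms(2000, 0.25, 0.75, 0.5))  # 188
```

The point is less the exact numbers than making every assumption explicit, so the client can challenge each one before the cloud is built.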
I’ll be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
In Collaboration With Ixia, Brocade Will Demonstrate the
Performance, Reliability and Advanced Feature-Set of the Industry's
First 100 GbE Terabit-Trunk Router
LAS VEGAS, NV -- (MARKET WIRE) -- 05/09/11 --
- INTEROP 2011 -- Brocade (NASDAQ: BRCD) today announced that it will work with Ixia
(NASDAQ: XXIA) to replicate mission-critical service provider
environments and test high-capacity Brocade® Ethernet network solutions
designed to help service providers become cloud-optimized. The
demonstration creates a true-to-life service provider infrastructure
scenario for increasing IPv4/IPv6 routing scalability within the core
Multiprotocol Label Switching (MPLS) network while retaining high
service levels for end customers. The demonstration will be held in the
Brocade booth (# 833) during Interop Las Vegas 2011, at the Mandalay Bay Convention Center.
As service providers evolve to become destinations offering cloud-based
services, rather than just basic data delivery, the performance and
scalability demands on their networks have increased significantly. The
Brocade MLXe Core Router is a 100 Gigabit Ethernet (GbE)-ready solution
that enables service providers and virtualized data centers to support
these demands by efficiently delivering cloud-based services that use
less infrastructure and help reduce expenditures.
In this specific demonstration, Brocade and Ixia
will test the IPv4/IPv6 traffic flows, MPLS and throughput capabilities
of the Brocade MLXe multiservice router over 10 and 100 GbE
connections. By leveraging Ixia's leading test solutions, attendees will be able to view the following:
Ixia IxNetwork application emulating a large-scale Layer 3 virtual
private network (VPN) topology surrounding the Brocade MLXe router
Brocade MLXe router maintaining forwarding information and peering relationships
IxNetwork generating line-rate traffic sourced and destined over K2
100 GbE ports and Xcellon-Flex™ 10 GbE ports to fully load the Brocade
MLXe router, showcasing the forwarding plane performance
IxNetwork's real-time flow statistics and detailed reporting tools
validating the scalability of the Brocade MLXe peering sessions and
control plane scalability
Brocade MLXe forwarding low-latency traffic to all destination routes
Repeatable and scalable testing using Test Composer automation built into IxNetwork
About Brocade
Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States
and/or in other countries. Other brands, products, or service names
mentioned are or may be trademarks or service marks of their respective