Cloud Computing Central
I'll make no bones about the fact that I'm a huge fan of Cloud Foundry. It's the right play, by the right people, at the right time. Despite all the attempts to dilute the message over the last eleven years, Platform as a Service (or what was originally called Framework as a Service) is about three things: you write code, you write data and you consume services. All the other bits, from containers to the management of them, are red herrings. They may be useful subsystems but they miss the point, which is the necessity for constraint.
Constraint (i.e. the limitation of choice) enables innovation, and the major problem we have with building at speed is almost always duplication or yak shaving. Not only do we repeat common tasks to deploy an application, but most of our code is endlessly rewritten throughout the world. How many times in your coding life have you written a method to add a new user or to extract consumer data? How many times do you think others have done the same thing? How many times are not only functions but entire applications repeated endlessly between corporates or governments? The overwhelming majority of the stuff we write is yak shaving, and I would be honestly surprised if more than 0.1% of what we write is actually unique.
Now whilst Cloud Foundry has been doing an excellent job of getting rid of some of the yak shaving, in the same way that Amazon kicked off the removal of infrastructure yak shaving - for most of us, unboxing servers, racking them and wiring up networks is thankfully an irrelevant thing of the past - there is much more to be done. There are some future steps that I believe Cloud Foundry needs to take, and fortunately the momentum behind it is such that I'm confident of talking about them here without giving a competitor any advantage.
First, it needs to create that competitive market of Cloud Foundry providers. Fortunately, this is exactly what it is helping to do. That market must also be focused on differentiation by price and quality of service and not the dreaded differentiation by feature (a surefire way to create a collective prisoner's dilemma and sink a project in a utility world). This is all happening and it's glorious.
Second, it needs to increasingly leave the past ideas of infrastructure behind, and by that I mean containers as well. The focus needs to be serverless, i.e. you write code, you write data and you consume services. Everything else needs to be buried as a subsystem. I know analysts run around going "is it using Docker?" but that's because many analysts are halfwits who like to gabble on about stuff that doesn't matter. It's irrelevant. That's not the same as saying Docker is not important; it has huge potential as an invisible subsystem.
Third, and most importantly, it needs to tackle yak shaving at the coding level. The simplest way to do this is to provide a CPAN-like repository which can include individual functions as well as entire applications (hint: GitHub probably isn't up to this). One of the biggest lies of object-oriented design was code re-use. This never happened (or rarely did) because no communication mechanism existed to actually share code. CPAN (in the Perl world) helped (imperfectly) to solve that problem. Cloud Foundry needs exactly the same thing. When I'm writing a system, if I need a customer object, then ideally I should just be able to pull in the entire object and its related functions from a CPAN-like library because, let's face it, how many times should I really have to write a postcode lookup function?
But shouldn't things like postcode lookup be provided as a service? Yes! And that's the beauty.
By monitoring a CPAN-like library you can quickly discover (simply by examining metadata such as downloads and changes) which functions are commonly being used and have become stable. These are all candidates for standard services to be provided in Cloud Foundry and offered by the CF providers. Your CPAN environment is actually a sensing engine for future services, and you can use an ILC-like model to exploit this. The bigger the ecosystem, the more powerful it will become.
I would be shocked if Amazon isn't already using Lambda and the API gateway to identify future "services", and Cloud Foundry shouldn't hesitate to press any advantage here. This process will also create a virtuous cycle: new things which people develop and share in the CPAN-like library will over time become stable, widespread and provided as services, enabling other people to more quickly develop new things. This concept of sharing code and combining the collaborative effort of the entire ecosystem was a central part of the Zimki play and it's as relevant today as it was then. By the way, try doing that with containers. Hint: they are way too low level, and your only hope is through constraint such as that provided in the manufacture of unikernels.
There is a battle here, because if Cloud Foundry doesn't exploit the ecosystem and AWS plays its normal game, then AWS could run away with the show. The danger of this seems slight at the moment (but it will grow) because of the momentum behind Cloud Foundry and because of the people running the show. Get this right and we will live in a world where not only do I have portability between providers, but when I come to code my novel idea for my next great something, I'll discover that 99% of the code has already been written by others. I'll mostly need to stitch the right services and functions together and add a bit extra.
Oh, but that's not possible is it? In 2006, Tom Inssam wrote for me and released live to the web a new style of wiki (with client side preview) in under an hour using Zimki. I wrote an internet mood map and basic trading application in a couple of days. Yes, this is very possible. I know, I experienced it and this isn't 2006, this is 2016!
Cloud Foundry (with a bit of luck) might finally release the world from the endless yak shaving we have to endure in IT. It might make the lie of object re-use finally come true. The potential of the platform space is vastly more than most suspect, and almost everything, and I do mean everything, will be rewritten to run on it.
I look forward to the day that most yaks come pre-shaved. For more read....
Microservice architecture resembles Service Oriented Architecture in that both rely on cohesive, loosely coupled services strung together to provide a solution. Beyond this similarity, the common nature of the two architectures seems to end. Microservice architecture consists of completely decoupled services that talk to each other via REST-over-HTTP API interfaces. Each service can run in its own environment, including a different programming language, and can have its own deployment and management cycle while keeping the final solution consistent.
Docker is a technology that is naturally suited to building an application with a microservice architecture. You can picture a Docker host as a wafer-thin Linux layer that can run multiple containers without a tight dependency on the host operating system. Compared to a regular VM, a container is lightweight and more manageable. You can load microservices into individual containers, each completely isolated from the others.
Besides isolation, Docker also provides a consistent environment as code moves from development to QA to production. Developers can have development-time Docker containers that run on minimal hardware resources, and the same code can be deployed consistently in full-scale production environments, including cloud and PaaS/IaaS infrastructures. Same code, different scale!
With that in mind, I would like to cover the implementation details associated with developing Java Microservices
in a Docker environment.
This step involves developing the Java Microservice. Keeping the scope of the blog in mind, this microservice can be downloaded and run with the following instructions
Implementation details about the Microservice can be studied in the source code by loading the project into your preferred Java IDE such as Eclipse.
Before the Microservice can be run inside Docker, the Docker technology must be installed on your local machine. You can follow step-by-step Docker installation procedure at: Docker Installation
Once Docker is installed correctly, you can test your installation using the following command:
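The original command snippet did not survive the formatting; a common way to verify a standard Docker installation is:

```shell
# Confirm the Docker client is on your PATH and report its version
docker --version
# Optionally, confirm the daemon works end to end by running a tiny test image
docker run hello-world
```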
Create a Microservice Docker Image
In the Docker ecosystem, there are two main concepts to understand: the image, which packages the application code and everything it needs to run, and the container, which is a running instance of an image.
For the above microservice, the container loads the microservice image, and as part of this image it not only loads the Application Code for the microservice, but also the Java 8 environment it needs to run the microservice.
But, before you can load the microservice into Docker, you need to create a Docker image for that software. The steps to create the image are as follows:
The Docker build process uses a file named Dockerfile to get its instructions about what to do when building an image. In this particular microservice, the Dockerfile instructs the Docker system to download an image called 'java:8'. This is the core infrastructure needed to run the microservice. Next it adds the microservice jar and configuration to the image. Finally, it exposes ports 9000 and 9001 to service requests.
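The Dockerfile itself is not reproduced here; a minimal sketch matching that description (the jar name and config path are illustrative assumptions, not the actual file names from the project) would look like:

```dockerfile
# Base image providing the Java 8 runtime
FROM java:8
# Add the microservice jar and its configuration (names are illustrative)
ADD hello-microservice.jar /app/hello-microservice.jar
ADD conf/ /app/conf/
# Expose the service port and the admin port
EXPOSE 9000 9001
# Run the service when the container starts
ENTRYPOINT ["java", "-jar", "/app/hello-microservice.jar"]
```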
docker build -t hello-microservice-local .
This is the command that processes the Dockerfile and produces the hello-microservice-local image.
Note: make sure this command is issued from a Docker-enabled session and not just any command line session.
Once this Java Microservice Docker image is created, it must be run inside a Docker container using the following command:
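The run command itself is missing above; a typical invocation (the container name and port mappings are assumed from the ports exposed earlier) would be:

```shell
# Run the image in the background, mapping the service and admin ports to the host
docker run -d --name hello-microservice-local -p 9000:9000 -p 9001:9001 hello-microservice-local
```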
You can test the Microservice in the browser using: http://localhost:9000/java/microservice
Once this works, you can stop the microservice using: docker stop hello-microservice-local
Publish the Microservice Docker Image
Now that you have the Microservice Docker Image working locally, you can publish this image to DockerHub to share with your team. This can be accomplished as follows:
Steps to post your local image to the remote repository
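The steps themselves did not survive the formatting; the usual sequence (the DockerHub account name your-dockerhub-id and the hello-microservice-remote tag are assumptions for illustration) is:

```shell
# Authenticate against DockerHub
docker login
# Tag the local image with your repository name
docker tag hello-microservice-local your-dockerhub-id/hello-microservice-remote
# Push the tagged image to DockerHub
docker push your-dockerhub-id/hello-microservice-remote
```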
Before testing the remote image, you need to delete the local images. Get the image IDs for both 'hello-microservice-local' and 'hello-microservice-remote' using: docker images
and remove the two images using the command: docker rmi -f imageid
Once the images are removed, you can test the remote image using the following command:
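The command was lost in formatting; pulling and running the published image (repository name assumed as above) would look like:

```shell
# Docker pulls the image from DockerHub automatically when it is absent locally
docker run -d --name hello-microservice-remote -p 9000:9000 -p 9001:9001 your-dockerhub-id/hello-microservice-remote
```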
aairom
The Ecosystem Development Cloud team in France will host IBM Business Partners on March 5th, 2015.
All information is available on the following site: http://www-01.ibm.com/software/fr/channel/KO_BP2015/
The agenda is described here: http://www-01.ibm.com/software/fr/channel/KO_BP2015/agenda.html and the workshops here: http://www-01.ibm.com/software/fr/channel/KO_BP2015/ateliers.html
We will be happy to welcome you for face-to-face discussions.
Alain Airom (cloud solution architect).
With the recent exploration of cloud computing technologies, organizations are using cloud service models like infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) along with cloud deployment models (public, private and hybrid) to deploy their applications.
DRussell4881
Come to the first Cloud Foundry Meetup in the Waltham area this coming Wednesday, December 11th!
DRussell4881
Managing software and product lifecycle integration has always been a challenge and with the rate of the new demands on the enterprise the challenges are increasing. Leaders from different standards organizations and industry will lead interactive discussions on the importance of open technologies to help enterprises manage the lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards and the importance of this work to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.
cynthyap
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization. If you're in North America, register here for the April 16th session: http://bit.ly/Y1X32g
If you're in Asia Pacific, register for the April 23rd session: http://bit.ly/1632q2Q
cynthyap
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess or manage. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency-- the ability to allocate IT costs, usage, and value.
Register today: http://bit.ly/VXXxl3
SteveCurtis
As a result of feedback from SmartCloud Enterprise customers and business partners, IBM is rolling out new enhancements this week.*
In addition to the availability of IBM SmartCloud Application Services, IBM’s platform-as-a-service offering, new and enhanced capabilities for IBM SmartCloud Enterprise include:
All the details of each new capability/enhancement can be found on the SCE portal in the “What’s New in SmartCloud Enterprise 2.2” document (SCE account sign-in is required to review the document), but here are a few highlights:
IBM SmartCloud Application Services (SCAS)
IBM’s platform as a service -- IBM SmartCloud Application Services -- runs on top of and deploys virtual resources to IBM SmartCloud Enterprise. SmartCloud Application Services delivers a secure, automated, cloud-based environment that supports the full lifecycle of accelerated application development, deployment and delivery. SCAS provides an enterprise-class infrastructure, enhanced security and pay-per-use, and allows clients to differentiate themselves with built-in flexible options that configure cloud their way – leading to a competitive advantage.
You can find the SmartCloud Application Services offering on the “Service Instance” tab within your SmartCloud Enterprise account.
Windows Instance Capture
As a direct result of client requests, we are offering additional flexibility and choice in Windows instance capture. Clients can now use the “Save private image” function with or without the use of Sysprep, the Microsoft System Preparation tool.
We invite you to learn more about all of these enhancements via the documentation library in the SCE portal and welcome your feedback. Thank you for your continued support!
* IBM will roll out these new capabilities in waves beginning mid-December 2012. IBM’s platform as a service offering, IBM SmartCloud Application Services, can be found in the “Service Instance” tab within your SmartCloud Enterprise account.
cynthyap
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
cynthyap
The challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity. A critical piece to solving these challenges, as many organizations have already discovered, is image management. Read more: http://ibm.co/SpHTlV
I've shared my thoughts on building a secure and trusted cloud on thoughtsoncloud.com
I hope you will enjoy the read and provide your comments. I especially wanted to highlight how we can improve trust.
Trust in the cloud can be established with the same principles that we use for traditional service management (read my earlier post on Cloud Computing Central for details):
cynthyap
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
Read more about how cloud orchestration can simplify and accelerate service delivery.
cynthyap
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model—either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
Read more about provisioning and orchestration capabilities to meet growing business needs.
I'm glad to let the Cloud Computing Central members know that I've also started writing on ThoughtsonCloud, the IBM cloud experts blog. Please read my first post on ThoughtsonCloud, about maximizing the value of cloud for small and medium enterprises (SMEs), and let me know your comments and feedback. Thanks!
Cloud Computing is a term that is often bandied about the web these days and often attributed to different things that -- on the surface -- don't seem to have that much in common. So just what is Cloud Computing? I've heard it called a service, a platform, and even an operating system. Some even link it to such concepts as grid computing -- which is a way of taking many different computers and linking them together to form one very big computer.
A basic definition of cloud computing is the use of the Internet for the tasks you perform on your computer. The "cloud" represents the Internet.
Cloud Computing is a Service
The simplest thing that a computer does is allow us to store and retrieve information. We can store our family photographs, our favorite songs, or even save movies on it. This is also the most basic service offered by cloud computing.
Flickr is a great example of cloud computing as a service. While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to store those images. In many ways, it is superior to storing the images on your computer.
First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload the photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road or even from your iPhone while sitting in your local coffee house.
Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.
Third, Flickr provides data security. If you keep your photos on your local computer, what happens if your hard drive crashes? You'd better hope you backed them up to a CD or a flash drive! By uploading the images to Flickr, you are providing yourself with data security by creating a backup on the web. And while it is always best to keep a local copy -- either on your computer, a compact disc or a flash drive -- the truth is that you are far more likely to lose the images you store locally than Flickr is of losing your images.
This is also where grid computing comes into play. Beyond just being used as a place to store and share information, cloud computing can be used to manipulate information. For example, instead of using a local database, businesses could rent CPU time on a web-based database.
The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them if you are cut off from the Web.
Cloud Computing is a Platform
The web is the operating system of the future. While not exactly true -- we'll always need a local operating system -- this popular saying really means that the web is the next great platform.
What's a platform? It is the basic structure on which applications stand. In other words, it is what runs our apps. Windows is a platform. The Mac OS is a platform. But a platform doesn't have to be an operating system. Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends such as Office 2.0, we are seeing more and more applications that were once the province of desktop computers being converted into web applications. Word processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small offices.
But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes, from web mashups to Facebook applications to web-based massively multiplayer online role-playing games. With new technologies that help web applications store some information locally -- which allows an online word processor to be used offline as well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.
Cloud Computing and Interoperability
A major barrier to cloud computing is the interoperability of applications. While it is possible to insert an Adobe Acrobat file into a Microsoft Word document, things get a little bit stickier when we talk about web-based applications.
This is where some of the most attractive elements of cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- become a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.
Ignoring for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into their spreadsheet, this creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, we'll also need a system to maintain a certain level of security when it comes to this type of data sharing.
Possible? Certainly, but it isn't anything that will happen overnight.
What is Cloud Computing?
This brings us back to the initial question. What is cloud computing? It is the process of taking the services and tasks performed by our computers and bringing them to the web.
What does this mean to us?
With the "cloud" doing most of the work, this frees us up to access the "cloud" however we choose. It could be a super-charged desktop PC designed for high-end gaming, or a "thin client" laptop running the Linux operating system with an 8 gig flash drive instead of a conventional hard drive, or even an iPhone or a Blackberry.
We can also get at the same information and perform the same tasks whether we are at work, at home, or even a friend's house. Not that you would want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.
Organizations looking to optimize across the application lifecycle recognize the need for enhanced innovation and speed to market. Yet most IT resources are focused on covering the basics, leaving fewer resources to support business agility. The solution: Platform as a Service (PaaS).
IBM’s PaaS solution, IBM SmartCloud Application Services, or SCAS, allows clients to differentiate themselves with built-in flexible services that allow them to build and customize cloud solutions their way, leading to a competitive advantage. Companies are using enterprise-class IBM Application Services to measure and respond to market demands, capture new markets, and reduce application delivery and management costs.
First, with IBM Collaborative Lifecycle Management Service, included within SCAS, development teams can establish shared team development environments in minutes, where it used to take weeks. Within hours they can define their development team and begin working collaboratively to respond to business needs.
Another significant benefit of a PaaS approach is the time it takes to get an application deployed and to market. Application deployment can take weeks on a traditional environment but with IBM SmartCloud Application Services, applications can be deployed to the cloud in minutes.
SCAS also allows clients to respond rapidly to changing market conditions by deploying or modifying cloud-centric (“born on the cloud”) or cloud-enabled (legacy) applications quickly and easily. In fact, developers can move from the dev/test environment directly into production with SCAS, taking advantage of proven repeatable patterns contained within the SmartCloud Application Workload Service. These repeatable patterns allow clients to eradicate errors by avoiding manual processes, which drives consistent results, increases productivity, and reduces risk.
IBM SmartCloud Application Services are compatible with the newly announced IBM PureSystems family. For example, through SmartCloud Application Services clients can rapidly design, develop, and test their dynamic applications on IBM's public cloud and deploy those same application patterns on a private cloud built with PureApplication Systems, or vice versa.
IBM SmartCloud Application Services is now in pilot and accepting new clients who want to get ready to accelerate their cloud initiatives. Clients won’t pay for SCAS services during the pilot, but will only be charged for the underlying *SmartCloud Enterprise infrastructure used by the services (that’s because SCAS runs on top of IBM’s Infrastructure as a Service offering, SmartCloud Enterprise, or SCE). Existing SCE customers can get up and running on the pilot quickly and start realizing the benefits of PaaS right away.
To be considered for the program, new or existing SCE customers should visit the IBM SmartCloud Application Services web site and click the button on the right titled, “Get a jump on the competition with the SmartCloud Application Services pilot program.”
You can learn more about IBM SmartCloud Application Services with this video, “The multifaceted potential of platform as a service (PaaS) from IBM.”
CLD Partners, a leading provider of IT consulting services with a particular focus on cloud computing, began using SCAS during the beta which launched in 2011 and has now transitioned into the pilot program.
“We share IBM’s vision for how enterprise customers can achieve huge productivity gains by embracing cloud technologies. SCAS allowed us to utilize world class software in a managed environment that greatly reduced the complexity of the deployment while also providing for future scalability that our customers only pay for when they need it,” said Steve Clune, Founder and CEO of CLD Partners. “Ultimately, traditional infrastructure planning and configuration that would have required weeks was literally reduced to hours. And future flexibility as infrastructure needs change is virtually limitless.”
IT Operations, Independent Software Vendors (ISVs), Line of Business, and Application Developers would benefit from the SCAS pilot program. And it doesn’t matter the company size, enterprise or mid-market; all types of businesses can realize value from getting their applications to market faster.
To learn more about the IBM SmartCloud Application Services pilot program, read the Pilot Services Bulletin or visit the Application Services web site.
One of the exciting and valuable characteristics of IBM SmartCloud Enterprise is its tight linkage with the IBM Software Group portfolio of offerings. In addition to the offerings from IBM Software Group, innovative software vendors are making exciting offerings available as well. There is an ever-growing list of offerings available to IBM SmartCloud Enterprise customers. These recent additions are now in the SmartCloud Enterprise public catalog and available for you to use.
BYOL - Bring Your Own License; PAYG - Pay As You Go
The following BPM images are now available in the catalog:
IBM Process Center Advanced 7.5.1 64b - BYOL
IBM WebSphere Service Registry and Repository (WSRR) is a system for storing, accessing and managing information, commonly referred as service metadata, used in the selection, invocation, management, governance and reuse of services in a successful Service Oriented Architecture (SOA). In other words, it is where you store information about services in your systems, or in other organizations' systems, that you already use, plan to use, or want to be aware of.
The following WSRR images are now available in the catalog:
IBM WebSphere Service Registry 64bit BYOL
IBM WebSphere Message Broker (WMB) delivers an advanced Enterprise Service Bus (ESB) that provides connectivity and universal data transformation for both standard and non-standards-based applications and services to power your SOA.
The following WMB images are now available in the catalog:
IBM WebSphere Message Broker 220.127.116.11 64b BYOL
IBM SPSS Decision Management enables business users to automatically deliver high-volume, optimized decisions at the point of impact to achieve superior results.
The following SPSS image is now available in the catalog:
IBM SPSS Decision Management 6.2 64b BYOL
From our partner Riverbed comes Riverbed® Stingray™, a software-based application delivery controller (ADC) designed to deliver faster and more reliable access to public web sites and private applications.
The following Riverbed Stingray images are now available in the catalog:
Riverbed Stingray V 8.0 RHEL 6 32 bit BYOL
Additionally, Alphinat SmartGuide provides visual, drag and drop tools that can help you quickly build interactive web dialogues that guide people to the relevant response, help them diagnose problems or lead them through a series of well-defined steps that make it easy to complete complex—or infrequently performed—tasks.
The following Alphinat SmartGuide images are now available in the catalog:
Alphinat SmartGuide 5.1.3 SLES 11 SP1 32-bit PAYG
GridRobotics' Cloud Lab Grid Automation Server can manage any number of client or agent computers, which can be spun up automatically on public clouds like IBM SCE or private clouds. Grid Robotics’ Cloud Lab Classroom is a virtual classroom management solution.
The following GridRobotics Cloud Lab images are now available in the catalog:
GridRobotics Cloud Lab Grid Automation Base Server 1.4 32b R2 - BYOL
We keep a list of our partners on our Cloud ecosystem partner images page.
We are committed to adding value continuously to IBM SmartCloud Enterprise to help you advance cloud in your organization.
Securing the Virtual Infrastructure
Cloud computing tests the limits of security operations and infrastructure from various perspectives. Let us examine what is different about cloud security, which threats already exist, and which new areas we should be concerned about.
Figure 2 Cloud Security - Existing & New Threats
I think what makes cloud security complex is the number of layers involved in the cloud service stack and the number of components in each layer. This means:
· Increased infrastructure layers to manage and protect
· Multiple operating systems and applications per server
More Components = More Exposure
As we can see, we already do perimeter protection at the network and operating system levels, as well as physical and personnel security, for traditional infrastructure. All of these hold good for cloud as well, to combat the existing threats at these layers.
Let us examine the new points of exposure with cloud. Security and resiliency complexities are raised by virtualization and automation, which are essential to cloud. The new risks include:
· Cloud Service Management Vulnerabilities
· Secure storage of VMs and the management data
· Managing identities on the increasing number of virtual assets
· Stealth rootkits in hardware now possible
· Virtual NICs & Virtual Hardware are targets
· Virtual sprawl, VM stealing
· Dynamic relocation of VMs
· Elimination of physical boundaries between systems
· Manually tracking software and configurations of VMs
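Several of these risks come down to replacing manual processes with automation. As a minimal sketch of the last item - manually tracking software and configurations of VMs - a baseline comparison can flag drifted machines automatically (the package names and versions below are hypothetical examples, not from any IBM product):

```python
# Illustrative sketch: detect configuration drift across VMs instead of
# tracking it manually. Package names and versions are hypothetical.
baseline = {"openssl": "1.0.1", "kernel": "2.6.32"}

vm_inventory = {
    "vm-001": {"openssl": "1.0.1", "kernel": "2.6.32"},
    "vm-002": {"openssl": "0.9.8", "kernel": "2.6.32"},  # drifted package
}

def drifted_vms(inventory, baseline):
    """Return, per VM, the packages whose versions differ from the baseline."""
    return {
        vm: {pkg: ver for pkg, ver in pkgs.items() if baseline.get(pkg) != ver}
        for vm, pkgs in inventory.items()
        if pkgs != baseline
    }

print(drifted_vms(vm_inventory, baseline))  # {'vm-002': {'openssl': '0.9.8'}}
```

In a real environment, the inventory would be collected by an agent or a management API rather than hard-coded.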
For managing these additional complexities, you need a reference model that is comprehensive and covers security controls that can combat not only the existing challenges but also the new challenges that cloud brings in.
The IBM Foundational Security Controls for the IBM Cloud Reference Model (see below) provide the different elements and controls required to build a secure cloud.
Figure 1 Foundation Security Controls for IBM Cloud Reference Model
Managing datacenter identities (identity and access management) is one of the topmost security concerns, and we discussed how to handle it in my previous post. I’ll discuss how to handle the virtualization-related threats in my next post.
Meanwhile, let me know your comments on this reference model. Do you think this set of controls is comprehensive? Do you see any areas not covered from a cloud security perspective? If so, just add it as a comment to this post and let us discuss.
Tracy of IBM Systems Events
Rethink IT. Reinvent Business.
Join us for the 2012 IBM SmartCloud Symposium event on 16-19 April 2012 in San Francisco, California. This symposium will help you Rethink IT and Reinvent Business.
This event will introduce cloud computing’s disruptive potential to not only reduce cost and complexity but also reinvent the way we do business. Over the course of four days, there will be sessions that define cloud computing and discuss transformative benefits and challenges to consider, while sharing specific, proven patterns of success. We will provide proven methods to get started on the cloud journey, from the up-front investments to capacity planning. This event will cover the technology behind private and public clouds, whether you choose to build your own, leverage prepackaged solutions or have it delivered as a service.
Sessions will explore challenges and solutions for security, virtualization and performance of mission-critical applications, as well as automating service delivery processes for cloud environments. We will help you design, deploy and consume.
Use promotion code A2N for 10% off enrollment!
Managing Datacenter Identities for Cloud
Among the top challenges for cloud, I discussed security as the top concern. I also detailed the top concerns with regard to securing the cloud in the subsequent post. Cloud computing tests the limits of security operations and infrastructure across the various security and privacy domains.
Cloud brings in a lot of additional considerations like multi-tenancy, data separation and virtualization. In a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases - greatly affecting all aspects of IT security. We will discuss the different security aspects, classifying them against specific adoption patterns (see post here). The cloud-enabled data center pattern is the more predominant one, with infrastructure and identity management as the top concerns. Within cloud security, getting the infrastructure security design right is the important aspect - we discussed the details, and how different public clouds do it, in the previous post. Now, with regard to identity, let’s discuss the top requirements and use cases, and look at what solutions we can provide to make the cloud secure. Let’s start with managing datacenter identities, which is the top concern.
Managing Datacenter Identities
Identity and access control needs to deliver capability that can be used to provide role-based access to securely connect users to the cloud. The users include cloud service provider roles as well as consumer roles. Within each user group, we need to support both User and Administrator roles. Identity and access management should address the 4As - Authentication, Authorization, Auditing and Assurance.
§ For a cloud consumer user, it is about making sure the user identity is verified and authenticated at the self-service portal, and providing the right access to the resource pools.
§ For the administrator, we need to provide role-based access to Service Lifecycle Management functions.
§ We will need to integrate with existing user directory infrastructure (AD/LDAP/NIS) to extend the user identity to the cloud environment as well.
§ Once in the cloud environment, we need to automatically manage access to the cloud resources, through provisioning and de-provisioning of resource profiles and users against the resources in the cloud identity and access management systems. Manual processes to manage accounts for users on various virtual systems and applications are not going to scale in a cloud environment. The same is true of the manual processes for working through various audit logs to meet compliance and audit requirements.
§ Massively parallel cloud-computing infrastructures involve enormous pools of external users as well. We need to ensure a smooth user experience so that users don’t need to enter their credentials multiple times to access various applications hosted within the enterprise or by business partners and cloud providers.
§ Management of user identities and access rights across hosted, private and hybrid clouds for internal enterprise users is also a major challenge that includes:
o Centralized user access management to on and off-premise applications and services
o Federated single sign-on and identity mediation across different service providers
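The role-based access idea above can be sketched very simply. In this minimal Python illustration, the role and permission names are invented for the example and are not those of any IBM product:

```python
# Minimal sketch of role-based access control (RBAC) for cloud users.
# Role and permission names are illustrative only.
ROLE_PERMISSIONS = {
    "consumer_user": {"provision_vm", "view_own_resources"},
    "cloud_admin": {"provision_vm", "view_own_resources",
                    "manage_service_lifecycle", "view_audit_logs"},
}

def is_authorized(role: str, action: str) -> bool:
    """The Authorization step of the 4As: does this role grant the action?"""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("cloud_admin", "manage_service_lifecycle"))  # True
print(is_authorized("consumer_user", "view_audit_logs"))         # False
```

A real identity and access management system layers authentication, auditing and assurance around this same role-to-permission check.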
Let’s look at some of the capabilities that we can leverage to address these requirements.
IBM Security Identity and Access Assurance - provides the following capabilities. These capabilities enable clients to reduce costs, improve user productivity, strengthen access control, and support compliance initiatives.
Sreek Iyer
Infrastructure Security Design (Public Clouds)
As we discussed in my previous post, transparency, or more control, is the need of the hour with regard to security on the cloud. Let us examine how this is done by the popular cloud providers and understand the methods and the technologies. We need to secure the infrastructure, network, endpoints, applications, processes, data and information, and overall have governance to mitigate risk and meet compliance. Let us take the infrastructure to begin with.
The key areas for a security team to design for, with regard to infrastructure security, are:
Let us start looking at the public cloud implementations to understand how they are managing these aspects.
Almost all the vendors - IBM, Amazon, Microsoft, Salesforce - provide a means to SSH with keys into the guest OS. The connection is authenticated with a public/private key pair, which can be generated by the customer.
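The key-based flow these providers share looks roughly like the sketch below: generate a key pair locally, register the public key with the provider when provisioning, then connect with the private key. This is a hedged illustration that shells out to the standard OpenSSH `ssh-keygen` tool; the user name and instance address in the comment are placeholders, not any provider's actual values:

```python
# Sketch of the key-based SSH flow: generate a key pair locally, upload the
# public key to the provider, then connect with the private key.
# Requires the standard OpenSSH ssh-keygen tool on the PATH.
import pathlib
import subprocess
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
key = workdir / "cloud_key"

# Generate an RSA key pair with no passphrase (illustration only; use a
# passphrase in practice).
subprocess.run(
    ["ssh-keygen", "-t", "rsa", "-b", "2048", "-f", str(key), "-N", "", "-q"],
    check=True,
)

# The .pub file is what you register with the provider when provisioning.
print(key.with_suffix(".pub").read_text().split()[0])  # "ssh-rsa"

# Connecting would then look like (not executed here; the user and address
# are provider-specific placeholders):
#   ssh -i cloud_key idcuser@<instance-ip>
```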
IBM LotusLive employs a security approach based on three pillars, which includes ensuring a security-rich infrastructure.
We will see how the infrastructure security aspects are dealt with for private clouds in my next post. Stay tuned and keep those comments coming. Some of my readers have told me that the blog entries are not showing up properly in Internet Explorer. While I will make the effort to fix the issue, please use Firefox or any other browser in the meantime.
And if you find these posts interesting, don’t forget to rate the post (click on the stars), and if you’ve got an extra minute, do put in a comment on what aspects you find interesting or would like to discuss.
Sreek Iyer
Securing the Cloud – What are the top concerns?
IT security is a well-researched and mature area. The reason we have enterprises doing commerce over the web today is that IT security practices, tools and technologies have matured to establish trust and overcome the concerns. As with most new technology paradigms, security concerns surrounding cloud computing have become the most widely talked about inhibitor of widespread usage, as discussed in my previous post.
To gain the trust of organizations, cloud services must deliver security and privacy expectations that meet or exceed what is available in traditional IT environments. Let us discuss the top security concerns when it comes to cloud.
Transparency or Less Control
If we look at the security and privacy domains in cloud, they are no different from the traditional domains. We need to secure the infrastructure, network, endpoints, applications, processes, data and information, and overall have governance to mitigate risk and meet compliance. But in a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases - greatly affecting all these aspects of IT security. The different cloud deployment models - public, private and hybrid - also change the way we need to think about security. The responsibilities are spread across consumers, service resellers and providers. The immediate risk of this shared responsibility is that nobody gets a holistic view of the security, and so there is less customization of any security controls. Consumers need visibility into day-to-day operations, as well as access to logs and policies. This lack of visibility or transparency is the top concern, shared almost universally.
Data and Information Security
The next primary concern that customers mention related to security on the cloud is related to data and information security. The specific concerns include
§ Protection of intellectual property and data
§ Ability to enforce regulatory or contractual obligations
§ Unauthorized use of data
§ Confidentiality of data
§ Availability of data
§ Integrity of data
A shared, multi-tenant infrastructure increases the potential for unauthorized exposure, especially in the case of public-facing clouds. Security administrators need to worry about designing security for applications and data that are publicly exposed and can potentially be accessed by anybody on the internet.
Different industries and geographies have different regulations and rules they need to comply with, depending on the workloads and data they put on the cloud. Complying with SOX, HIPAA and other regulations is one risk because of which customers are not ready to put their applications on the cloud. Cloud or no cloud, for these sorts of workloads comprehensive auditing capabilities are essential.
Security Management - Methods and Tools
Finally, customers need to know how today’s enterprise security controls are represented in the cloud. They need to understand how security events are monitored and correlated, and how actions are taken when needed to keep their infrastructure, workloads and data safe. Security getting in the way of high availability is another key concern. IT departments worry about a loss of service should outages occur for security reasons. When running mission-critical applications, how soon you can get the environment back at the same level of security is the priority.
Until all of these concerns are addressed, and without strong availability guarantees, customers may not be ready to run their apps in the cloud. But things are not as bad as we might think. We will discuss how these aspects can be addressed, and what tools and technologies to put to use, in subsequent posts.
Meanwhile, I recommend that you read this very interesting whitepaper, “Cloud Security: Who do you trust?”, which discusses all of these aspects in detail, as well as the different security challenges that cloud introduces.
Cloud Security – The Topmost Concern and Opportunity
First of all, wishing all my readers a very happy and prosperous year 2012 ahead.

A few things happened towards the end of the year which were significant to me. IBM acquired Q1 Labs to drive greater security intelligence and created a new Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I am happy to be in this new role, where I can provide solutions to customers to handle their cloud security concerns and make it easy for them to adopt cloud and innovate at a faster rate than before.

In my previous post, we discussed security as the topmost concern why customers and enterprises are not adopting cloud. As part of this year’s posts, I plan to discuss the various security issues and aspects of cloud computing.
We will explore the unique challenges with cloud security and discuss which aspects are important for each customer adoption pattern that we have seen.
We will also learn how the IBM Security Framework can be used to address the various security challenges, namely:
· Security governance, risk management and compliance
· People and Identity
· Data and information
· Application and process
· Network, server and endpoint
· Physical infrastructure
Looking forward to your comments and inputs in this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the World’s Most Comprehensive Security Portfolio – IBM Security Systems. I’ll try and elaborate the IBM Point of View on cloud security and discuss the architectural model to address the security requirements for cloud. Stay tuned and keep those comments and inputs coming.
cynthyap
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increase utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
· Rapidly scalable deployment designed to meet business growth
· Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
· Reduced complexity through ease of use and improved time to value
· Reduced IT labor resources with self-service requesting and highly automated operations
· Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, or start a single VM and load OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
Sreek Iyer
Possible Solution for the Mullaperiyar Dam Issue?
While I’m writing this blog, the ministers of Tamil Nadu and Kerala are having a meeting with the Prime Minister to discuss the contentious Mullaperiyar issue at length. For those who don’t know about it, this is about the Mullaperiyar Dam in south India. Mullaperiyar Dam is a masonry gravity dam over the River Periyar, operated by the Government of Tamil Nadu based on a 999-year lease agreement. The catchment areas and river basin of the River Periyar downstream include five districts of central Kerala, namely Idukki, Kottayam, Ernakulam, Alappuzha and Thrissur, with a total population of around 3.5 million.
This dam is at centre stage again in the wake of reports that it is weakening, given the increase in incidents of tremors in Idukki district in Kerala. Ministers from Kerala are seeking Central Government intervention to ensure the safety of the dam. At the same time, Tamil Nadu is insisting on increasing the water level in the reservoir to enhance water supply to the state. While Tamil Nadu wants to increase the water level in the reservoir, Kerala has been insisting that it be reduced from the current 136 feet to 120 feet.
Currently I don’t think we have clear metrics on the exact usage of water by each state, the right level of water to be retained by the dam, the risks involved, etc. We have been relying on data from the past.
However you look at it - whether too much or not enough - the world needs a smarter way to think about water. We need to look at the subject holistically, with all the other considerations as well. We use water for more than drinking. We need to make an inventory of how much water we get and how it is used - by industries, irrigation, etc. This is where I think we need smarter ways to manage the water in the best possible way, addressing both states’ requirements adequately.
IBM Smarter Water Management can help us think in a smarter way about water. For instance, IBM is helping the Beacon Institute build a source-to-sea real-time monitoring network for New York’s Hudson and St. Lawrence Rivers, and report on conditions and threats in real time. There are many other case studies across the globe on IBM Smarter Water Management.
Those interested in the problem and the possible solutions should definitely read IBM’s broader outlook on Water Management as covered in the Global Innovation Outlook.
Rivers for Tomorrow is another interesting partnership between IBM and The Nature Conservancy. IBM is providing a state-of-the-art support system for a free, online application that will provide easy access to data and computer models to help watershed managers assess how land use affects water quality.
Though water is a worldwide entity, it is treated as a regional issue. I think we should try putting technology to use to solve our water problems. The solution should be a more instrumented, interconnected and intelligent system that not only takes into consideration real-time monitoring of the river but also includes early warning systems to notify of risks related to earthquakes, etc. IBM’s Strategic Water Management Solutions include offerings to help governments, water utilities and companies monitor and manage water more effectively. The IBM Strategic Water Information Management (SWIM) solutions platform is both an information architecture and an intelligent infrastructure that enables continuous automated sensing, monitoring and decision support for water management operations.
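As a toy sketch of the early-warning idea, consider a threshold check over water-level readings. The readings below are invented, and 136 feet is simply the current level cited earlier in this post; a real system would pull readings from an instrumented sensor network:

```python
# Toy early-warning check over water-level readings.
# Readings are invented; 136 ft is the level cited in the post.
SAFE_LEVEL_FT = 136.0

def breaches(readings):
    """Return the (timestamp, level) pairs that exceed the safe level."""
    return [(t, lvl) for t, lvl in readings if lvl > SAFE_LEVEL_FT]

readings = [("06:00", 134.2), ("12:00", 136.5), ("18:00", 135.9)]
print(breaches(readings))  # [('12:00', 136.5)]
```

The value of the real platform is not in this check itself but in the continuous, automated sensing that feeds it and the decision support built on top.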
And you might be wondering what has this to do with Cloud and why is this post on cloud computing Central. For these solutions and platforms to be successful it is highly important that we have energy efficient high-performance computing platforms and complex sensor, metering, and actuator networks. Such platform needs and flexible choices of having the solution on-premise as well as leverage different delivery models can only be supported through a cloud.
I think we should just leverage these solutions on the cloud to solve this issue and keep all the states and its people happy :-).
Sreek Iyer
Top 5 Challenges to Cloud Computing
In my previous post, we looked at understanding the different adoption patterns - i.e. how customers are turning towards cloud. Some of the key reasons for the “why” are listed below.
While all of these are good, there are still many yet to get onto this cloud computing train. Let’s explore their key concerns, and the challenges that make them reluctant to jump in. The following are inputs that I’ve gathered from various analyst studies and resources on the internet.
I plan to discuss more on the perceived and real threats related to security and privacy in my subsequent posts. In my new role as an architect for IBM Security Solutions, I’d like to discuss the details of which IBM tools and technologies you could use to overcome the issues.
Meanwhile, keep those comments coming. I look forward to them to understand what other areas you think are key concerns to be addressed to accelerate adoption of cloud.
IBM Tech Trends Report says during the next 2 years 75% of organizations will engage in cloud computing
RHyman
The IBM Tech Trends report is out! We asked, you answered. Check out the results of IBM developerWorks' 2011 Tech Trends survey and find out what more than 4,000 IT professionals -- your peers -- have to say about the future of technology, including their opinions on cloud computing, business analytics, mobile computing, and social business.
The report provides insight from the worldwide IT development community into the adoption, preferences and challenges of key enterprise technology trends including cloud, business analytics, mobile computing, and social business. The results also provide guidance on areas where IT professionals like you say they need help with skills to develop new technologies and platforms that will be in demand in the coming years.
As we focus in on cloud, there is absolutely a growing trend to view cloud computing as more than just cheap infrastructure. Companies are now exploring the possibility of developing applications in the cloud (you guys are already doing that), many of them related to mobile development.
Currently the biggest challenge is integrating the cloud into application development, as the reduction of operating expenses is the driver of this move. We still have a way to go, however, with 40% of the survey responders saying their company is not yet involved in cloud. Hmm, interesting, right?
The cool news is that 75% of those same responders expect this to change over the next two years, with theirs and other enterprises taking to building cloud infrastructure.
Understanding the Cloud Adoption Patterns
I discussed The Next Big Thing - cloud-enabled business model innovation - in my previous post. But you may be asking: where do I start? That’s where I think the Cloud Adoption Patterns work that IBM has pioneered is going to help. This is some great analysis that IBM has done based on the thousands of cloud engagements we have delivered so far. It is a good abstraction of the ways organizations are consuming cloud - a good starting or entry point for discussions on cloud.
The four most common entry points to cloud solutions are discussed in the picture above. I love these videos on YouTube - Cloud Adoption Patterns - which tell you the essence of these patterns in less than 2 minutes.
· Cloud-Enabled Data Center – to achieve better return on investment and manage complexity by extending virtualization well beyond just hardware consolidation.
· Cloud Platform Services – to accelerate time-to-market by creating, deploying and managing cloud applications.
· Business Solutions on Cloud – to access enterprise-level capabilities through a provider’s applications running on a cloud infrastructure; to improve innovation and flexibility while minimizing risk and capital expense.
· Cloud Service Provider – to innovate with new business models by building, extending, enabling and marketing cloud services.
For each of these patterns of cloud adoption, we have defined a set of proven projects that it supports with software, services and solutions to help businesses streamline the implementation of their chosen cloud capabilities.
The Cloud Enabled Data Center pattern covers most private cloud implementations. Most customers start with providing infrastructure as a service on the cloud. This pattern also discusses how we can share infrastructure across multiple projects and drive benefits, and covers a lot of automation in the operational and business processes, making possible a responsive IT department that can help the business be agile.
The next level of gain or reuse is to run your workloads on a shared stack of middleware. The Platform as a Service pattern is an integrated stack of middleware that is optimized to execute and manage different workloads, for example batch, business process management and analytics. This middleware stack standardizes and automates a common set of topologies and workloads, providing businesses with elasticity, efficiency and automated workload management. A cloud platform dynamically adjusts workload and infrastructure characteristics to meet business priorities and service level agreements. When all the layers below understand what workloads are running on top of them and optimize themselves accordingly, those workloads run more efficiently and at a lower cost. The Cloud Platform Services adoption pattern can improve developer productivity by eliminating the need to work at the image level so that developers can instead concentrate on application development.
The Business Solutions pattern maps to the SaaS model, where you leverage cloud to innovate with speed and efficiency to drive sales and profitability. Here we look at creating and consuming business solutions on the cloud. Some of the key offerings in this space are business process design, social and collaboration tools, supply chain and inventory, digital marketing optimization, B2B integration services, etc. These generic services consumed from the cloud relieve you of the pain of setting up things from scratch and enable you to scale based on your demands.
The Cloud Service Provider (CSP) pattern is the one that most telcos adopt when they have to serve multiple consumers with a single cloud solution. We provide tools and technologies to design and deploy a highly secure, multi-tenant cloud services infrastructure that can integrate nicely with plenty of third-party applications.
As we understand, the IaaS pattern is easy to do, and there is more work when we implement the SaaS or CSP patterns. But the gain is greater when we share at the software or application level. Depending on where you are in your current IT environment, you can pick and implement whichever of these patterns suits you. The work that we have done to analyse these patterns and provide a consistent set of technologies and tools to build them out should make life easy for you. Leverage it - less pain and more to gain.
There's still time to sign up for the IBM webcast: Managing the Cloud – Best practices for cloud service management
The Next Big thing – Cloud enabled business model Innovation
I remember the day when one of our executives, Nick Donofrio, visited us in India. He is like a chief mentor for all the members of the IBM technical community, and he has watched IBM and the IT industry for many years. He was addressing a Technical Exchange event a few years ago when someone in the audience asked him this question: “Sir, you have seen technology for so many years now – can you tell us what’s going to be the next big thing in terms of invention/innovation?” Everyone was all ears waiting for the answer – is it the next version of the internet, search, a Web 2.0 application or maybe an intelligent mobile app? But his answer was that he believes there is not going to be any next big thing in technology. The next big thing for all of us is going to be business model innovation. Even today his statement holds very true. Businesses that are able to reinvent their business model succeed and manage to stay on top, while others vanish from the scene.
There are lots of innovative technical things happening all around us, such as:
I believe the next big thing is going to be how well you can use all these elements for business, seamlessly and cost effectively. The key to success is to use technology to do this business model innovation, and to do it faster.
How do you do it faster? The answer is cloud. I say this based on the data IBM has gathered analyzing the cloud adoption patterns of over 2,000 customers. All of them have seen the following advantages with cloud.
Considering all these factors, I think the next big thing is cloud-enabled business model innovation. I could easily relate to some of the latest announcements we have made in the cloud, because they restate this same belief. As discussed in this interesting video by IBM's Saul Berman (Innovation & Growth Leader), 60% of the customers IBM interviewed say they would consider cloud immediately, and 70% of them intend to use cloud to enable business model innovation. Based on the rate at which they adopt new technologies, they may be an Optimizer (looking to improve the existing model), an Innovator (looking at a new model) or a Disruptor (ready to bring in game-changing ideas).
So, as today’s IT leaders, let us broaden our focus from merely delivering technology to solving larger business issues. One great opportunity to do that is to tune in to, or be present at, SWG Universe India 2011. You will get a chance to listen to some great speakers who will talk about how to use cloud for business model innovation.
Cloud-enabled business model innovation, I feel, is the next big thing that could change IT and businesses. So come, let’s Rethink IT & Reinvent Business.
cynthyap — Tags: provisioning, cloud service, cloud_computing, management, virtualization
Today IBM announced new SmartCloud Foundation capabilities to help organizations realize the potential of cloud computing. Watch the replay of the IBM SmartCloud launch webcast, to learn more about how the new announcements, including IBM SmartCloud Provisioning (delivered by IBM Service Agility Accelerator for Cloud), can help customers move beyond virtualization to more advanced cloud deployments.
To be responsive to your reading interests and learning needs, I thought I'd gather some quick feedback to help me understand your reactions to my blog. I request your response through this short survey; it should not take long.
You can see all the blog entries in this category by clicking on the tag "stepbystep". If you liked any entry in the blog, please rate it by clicking on the "star", or feel free to provide your comments and inputs through the feedback form.
You can access the feedback form here.
Look forward to your comments and inputs.
Sreek Iyer — Tags: cloud_computing, tivoliindia, ibmswuin, ibmindia, cloud, stepbystep
I've been writing about the step-by-step approach to cloud till now. Given the rate at which I see cloud computing being adopted inside and outside the enterprise, I think we really need to get out of our step-by-step approach and start riding the wave. IBM has implemented maybe over 2,000 cloud engagements in the last year and is managing over 1 million virtual machines today. We have identified the customer cloud adoption patterns and entry points to cloud, and have lots of lessons learnt and experience to share. So wouldn’t it be nice if we could talk to you about these things and share the best practices with you? All of that is difficult to do through a blog. So you have a better option – the IBM Software Universe 2011 – The Next Big Wave.
Yes, the 7th edition of IBM India’s largest annual software conclave is happening this year on Oct 19th and 20th. I believe it would be time well spent to learn from our experience and accelerate your adoption of cloud. We have some interesting sessions on Private Cloud [R]Evolution, which will discuss some of the key trends and technologies for building the cloud inside your firewall. If you are looking to understand how to expand your existing data center capabilities to gain better visibility, control and automation across your physical and virtual environments, then “Integrated Service Management – Thinking Beyond the Data Center” is a must-attend session. And if you are one of those business or enterprise IT managers looking to start with the cloud, you don’t want to miss the “Get Your Head in the Cloud” session, which can tell you how to get some of your collaboration requirements from the cloud.
Finally, it is a wonderful opportunity for you to talk to some of the Distinguished Engineers and IBM Fellows, who can spend 1:1 time with you to listen to your issues and problems as well as discuss the future roadmap. For instance, Bala Rajaraman, a Distinguished Engineer whose responsibilities include the architecture and design of Cloud & Service Management solutions, is going to be in India, and it is your opportunity to catch up with him.
Last but not least, there are going to be Solution Expos set up for you, so you have an opportunity to touch and feel the cloud solutions. These should include industry-specific demos and technology/product demos from IBM as well as partners.
So be there on Oct 19th and 20th at the IBM Software Universe 2011. It is going to teach you a new skill – how to ride the next big wave… the cloud wave.
cynthyap — Tags: management, virtualization, service, managing, cloud, monitoring, cloud-computing
Join us for the Managing the Cloud Webcast series to learn more about best practices, technical approaches and capabilities to help solve your business and technical challenges in the cloud. Sign up for these free 1 hour webcasts today.
Best practices for cloud service management - Nov 8, 12-1EST
Organizations today are looking to cloud computing to deliver cost savings and faster service delivery. However, most organizations are still struggling to build the basic IT infrastructure necessary to take the leap to a robust cloud. This session will explain how service management can help provide the essentials to maintain service levels in the cloud, along with best practices based on IBM's work with customers. This information will provide the foundation for building and managing a cloud to meet your business objectives and transform IT. https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=swg-tivoli-nov8managingcloud
Performance management in the cloud - Nov 15, 12-1EST
Cloud services can leverage everything from databases to mainframe transactions to SOA services, so the ability to see how all these different touch points are performing is critical. See how integrated service management can provide the capabilities you need to monitor and manage today's cloud-based services and help you meet your service level goals. https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=swg-tivoli-nov15managingcloud
Chapter 19 – Tivoli Process Automation Engine
As we discussed in the previous post, it is important that all the processes work together to bring successful automation to the cloud management platform. A process workflow automation engine is what makes this possible. In this chapter we will discuss the Tivoli process automation engine, which forms the base for IBM process automation in the cloud space.
The Tivoli process automation engine provides a user interface, configuration services, workflows and the common data system needed for IBM Service Management products and other services. As we already know, IBM Service Management (ISM) is a comprehensive and integrated approach to service management, integrating technology, information, processes and people to deliver service excellence and operational efficiency and effectiveness for traditional enterprises, service providers and mid-size companies. The Tivoli process automation engine, previously known as Tivoli base services, provides the base infrastructure for applications like Tivoli Maximo Asset Management, Change and Configuration Manager Database (CCMDB), Tivoli Service Request Manager (SRM), Tivoli Asset Management for IT (TAMIT) and Tivoli Provisioning Manager, as well as Tivoli Service Automation Manager. Any product that has the Tivoli process automation engine as its foundation can be installed alongside any other product built on the engine.
IBM Service Management (ISM) comprises
By having a common process automation engine, we can successfully link operational and business services with infrastructure through a single (J2EE) platform. We can also leverage current investments by linking this engine with existing process automation technologies and products. By building a unified platform to automate processes, we have taken data integration to the next level, where sharing data between applications has never been easier.

This integrated process automation platform can support repeatable IT functions like incident management, problem management, change management and configuration management, all the way through to release management. All of these processes tie into the CMDB, where they share consistent data via bidirectional integration. The platform supports ITIL and other industry best practices, which facilitates an automated approach across the IT management lifecycle. It also forms the basis for automating repetitive tasks that can be handled by the system instead of requiring costly human intervention. Through its adapters, TPAE provides data federation from the multiple sources you already have, translating the information into usable data that can be leveraged by internal processes and workflows.
Figure 1 Tivoli process automation integrated portfolio
The Tivoli Process Automation Engine Wiki provides details on each of the components and capabilities that make up this integrated portfolio.
The Certification Study Guide Series: Foundations of Tivoli Process Automation Engine is an IBM® Redbooks publication that can guide you toward an IBM Professional Certification on the Tivoli Process Automation Engine.
Brocade and Avnet Technology Solutions Bring Simplified Server and Desktop Virtualization Solutions to the Channel Through the Brocade CloudPlex Architecture
JeffHebert — Tags: enterprise, paas, cloud, switching, saas, emerging, network, iaas, storage, technology
JeffHebert — Tags: switching, enterprise, saas, cloud, networking, paas, iaas, storage
In a cloud service provider environment, there are various business processes and compliance requirements that need to be addressed before the environment can go live. The areas that need to be designed are the following:
IBM's strategy differs from other vendors' in that it is focused on bridging business and IT processes using a common software framework with common services, including process automation and security services. IBM Service Management is built on the Tivoli Service Management Platform and wrapped with best practices, methodologies and services to help you deliver services to your customers effectively and efficiently.
We provide an integrated solution that represents the full management of data, processes, tooling and people. The key differentiator is a common data model that all the core solutions can share for simple data sharing. It is important that all the processes work together, and a process workflow automation engine is what makes this possible. We will discuss this common workflow process automation engine in the next post.
FleetCor Selects Brocade to Provide Cloud-Optimized Network Services for 500,000 Commercial Accounts
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 -- Brocade (NASDAQ: BRCD) today announced that FleetCor, a leading independent global provider of specialized payment products and services to businesses, commercial fleets, major oil companies, petroleum marketers and government fleets, has selected Brocade as the vendor to build its cloud-optimized network. This new network enhances FleetCor's ability to securely process millions of transactions monthly and ultimately better serve its commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor cardholders worldwide, and they are used to purchase billions of gallons of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help evolve its data center and IT operations into a more agile private cloud infrastructure. Brocade® cloud-optimized networks are designed to reduce network complexity while increasing performance and reliability. Brocade solutions for private cloud networking are purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we looked at market leadership and non-stop access to critical data," said Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade cloud-optimized networking solutions are perfect for our data centers because they allow us to optimize applications faster, virtually eliminate downtime and help us meet service level agreements for our customers. Moving to a cloud-based model also provides us the flexibility to make adjustments on the fly and access secure information virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router in each of its three data centers, citing scalability as a major driver for the purchase. This approach enables FleetCor to virtualize its geographically distributed data centers and leverage the equipment it already has to achieve maximum return on investment. The Brocade MLXe provides additional benefits for FleetCor by using less power and having a smaller footprint than competitive routers – critical in power- and space-constrained locations in order to allow for growth. The Brocade MLXe also enables continuous business operation for FleetCor through Multi-Chassis Trunking, massive scalability supporting the industry's highest 100 GbE density with no performance degradation for advanced features like IPv6, and flexible chassis options to meet network and business requirements.
The Brocade ServerIron ADX Series of high-performance application delivery switches provides FleetCor with a broad range of application optimization functions to help ensure the reliable delivery of critical applications. Purpose-built for large-scale, low-latency environments, these switches accelerate application performance, load-balance high volumes of data and improve application availability while making the most efficient use of the company's existing infrastructure. The series also delivers dynamic application provisioning and de-provisioning for FleetCor's highly virtualized data center, and enables seamless migration and translation to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers FleetCor has eliminated thousands of costly networking cables, saving it hundreds of thousands of dollars and allowing the company to segment, streamline and secure its network. FleetCor has also been able to easily integrate Brocade network technology with third-party offerings already installed in the network, for complete investment protection. FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in each of our data centers to help us leverage the benefits of cloud computing and the Brocade MLXe delivered on all fronts," said Keirbeck. "By virtualizing our data center, Brocade allows for non-stop access to the mission-critical data that FleetCor and its customers rely on every day. We chose the Brocade MLXe because of the tremendous results we already saw from our existing Brocade solutions and the exceptional support and service."
According to a report from analyst firm Gartner, "Although 'economic affordability' is an immediate, attractive benefit, the biggest advantages (of cloud services) result from characteristics such as built-in elasticity and scalability, reduced barriers to entry, flexibility in service provisioning and agility in contracting."(1)
Social Media Tags: Brocade, LAN, Local Area Network, ADX, ServerIron, MLX, MLXe, reliability, scalability, security
(1)Gartner " Cloud-Computing Service Trends: Business Value Opportunities and Management Challenges, Part 1" February 23, 2010
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.
AndyGroth — Tags: smartcloud, vmware, paas, kvm, ibm_workload_deployer, cloud, red_hat
Note: This is a (slightly updated) re-post from a personal blog - just my view in the context of IBM's drive to foster open choice and collaboration.
Please bear in mind that this is based on my personal thoughts (not an official IBM position) and read the article as it is intended to be - thought provoking ... enjoy!
Having returned from the European Red Hat Partner summit and the VMware vForum where I presented on behalf of IBM, it took me a while to digest the “openness” of it all …so let me share my thoughts retrospectively.
Being proprietary rocks…! (?)
Public cloud can only exist on open source … ?
There was a bold statement by a speaker at the Red Hat summit: “Public cloud can only live on open source!”
In the meanwhile, keep your eyes peeled and expect the industry to increase its focus on enabling hybrid connectors – I obviously can't make any specific forward-looking statements from an IBM perspective. But just take Red Hat as an example: it made clear that CloudForms (their IaaS platform) can indeed manage VMware through their DeltaCloud driver, and – while currently positioning CloudForms for private and hybrid – their vision (of course) is for DeltaCloud to be the top-level public layer linking into private (or public) VMware clouds.
Red Hat recently announced their hosted “OpenShift” PaaS platform, which essentially allows developing and running Java, Ruby, PHP and Python applications and comes in 3 different editions: 1) “Express” (free), which provides a runtime environment for simple Ruby, PHP and Python apps; 2) “Flex”, for multi-tiered Java and PHP apps with more options (like MySQL DBs and JBoss middleware); and 3) full control with the “Power” edition, supporting “any application or programming language that can compile on RHEL 4, 5, or 6″ and enabling you to deploy apps directly on EC2 and (in the near future) to IBM’s SmartCloud.
VMware had earlier announced their own open “Cloud Foundry” PaaS project; it has incarnations as a fully hosted service (currently in beta), as an open source project (CloudFoundry.org) and as a free single PaaS instance for local development use.
So what's IBM doing in this space? IBM has recently announced the IBM Workload Deployer – an evolution of the WebSphere CloudBurst hardware appliance. It essentially stores and secures "WebSphere Application Server Hypervisor Edition images" and, more importantly, workload patterns which can be published into a cloud. These workload patterns (think of them as customizable templates that capture the settings, dependencies and configuration required to deploy applications) enable you to focus on what essentially differentiates PaaS from IaaS ... the application rather than the infrastructure. Dustin Amrhein explains this much better than I do in this little blog.
So, yes, I honestly believe that KVM has a good chance of becoming the hypervisor of choice for public cloud. However … that is unlikely to be the control point. So which management platform(s) will take that all-important crown? Will it be an OSS-based one? I don’t want to hazard a guess; there are many … and that is part of the problem. Many argue that the open source “communities” will have to overcome a challenge and become a COMMUNITY if they want to succeed. ESX could not be beaten with 7 or 8 different (but weak) flavours of Xen, and that was just a single OSS project splintered by commercial offerings … in the same way, the sea of OSS-based cloud controllers – Eucalyptus, OpenStack, CloudStack, DeltaCloud, OpenNebula – faces focused (more proprietary) heavyweights like Microsoft, Google and Amazon.
The increasing number of OSS management solutions and “open bodies” will also make e.g. VMware less nervous than intended, as long as those solutions indirectly compete with each other …
BUT (and it’s a big “but”) I would argue that anyone not strategically looking at these open solutions is at best ignorant or – e.g. if you are a service provider yourself – more likely long-term professionally suicidal. Yes, in an ideal world everyone wants ‘today’s best of breed’, but more critically you have to maintain your negotiating position through the ability to switch, and if only for that reason alone you need to keep your options open!
It will be of the utmost importance to partner with solution providers who share this mind-set and have the capability and strategy to support such a long-term goal – and yes, IBM is uniquely positioned to fulfill this role.
And while I spoke to many completely different clients at both events, that was a common concern raised by most of them.
Industry endorsement like the recent OVA announcement - with IBM being a major driving force and supporter - will help to give KVM the needed credibility and weight … I am looking forward to seeing these visions translated into tangible solutions.
- Test Drive the IBM SmartCloud with this simulator...
- CloudForms (IaaS) is in beta with availability planned for fall 2011
JeffHebert — Tags: storage, scalable, secure, emerging, paas, technology, networking, iaas, servers, cloud, available, reliable, saas
Great video. There are a great many folks who have already started making the journey into the clouds without being fully aware of it. Consider that most large enterprise data centers are consolidating and virtualizing servers, storage and networking today; when you get all three of those areas consolidated and virtualized, you are transforming business processes and will eventually reach a point where infrastructure/information on demand is the next logical step.
Cloud Service Provider Platform (CSP2)
Through the earlier posts we have seen the essentials of creating a cloud environment, which consists of the management platform as well as the managed environment. We have seen the critical roles and organizations involved, the importance of Cloud Service Strategy and Cloud Service Design, and the criticality of a Cloud Computing Reference Architecture (CCRA) to tie all the solution elements together. We also saw how IBM Service Delivery Manager (ISDM), an enterprise cloud solution based on Tivoli Service Automation Manager (TSAM), can be deployed as a set of virtual images that automate IT service deployment and provide resource monitoring, cost management, and provisioning of services in the cloud.
The IBM Cloud Service Provider Platform is specifically tailored to the needs of CSPs and is designed to help them successfully:
Figure 1 IBM Integrated Service Management Solution for Cloud Service Providers
The IBM Cloud Service Provider Platform, an integrated Service Management offering for cloud service providers, is built around a core Service Automation and Management component provided by ISDM. Beyond the core, IBM’s Integrated Service Management for Cloud Service Providers makes available four extensions – network management, security management, storage management, and advanced monitoring and service level management – that enable a comprehensive management offering.
Communications service providers (CSPs) around the world are looking for smarter ways of doing business. They are being challenged to transform the way services are created, managed and delivered. CSP2 neatly integrates and extends the Service Provider Delivery Environment (SPDE) for communications service providers, building the ecosystem they need to become cloud service providers. For a cloud-based business strategy, check out the video from Scott on the value of CSP2 for CSPs.
In this article learn how to:
Set up a 64-bit Linux instance (a Bronze-level offering) with the Linux Logical Volume Manager (LVM).
Capture a private image and provision it as a new Platinum instance.
Grow the LVM volume and file system to accommodate the new physical volumes.
Configure LVM across physical volumes using Linux LVM-type partitions.
Background on LVM and the test scenario
First, a description of LVM concepts and the test scenario for those who may not be familiar with LVM.
Note: You are about to configure Linux LVM: Here be Dragons. Mind the gap.
The Linux LVM is organized into physical volumes (PVs), volume groups (VGs), and logical volumes (LVs).
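To make the hierarchy concrete, here is a small sketch that models how these pieces nest: PVs pool their capacity into a VG, and LVs are carved out of the VG's combined space. This is an illustrative model only – the class and names are invented for the sketch, and real LVM is driven with tools like pvcreate, vgextend and lvcreate:

```python
# Simplified model of the LVM hierarchy (illustration, not the real tooling):
# physical volumes (PVs) pool into a volume group (VG),
# and logical volumes (LVs) allocate from the VG's total capacity.

class VolumeGroup:
    def __init__(self, name, pv_sizes_gb):
        self.name = name
        self.pv_sizes_gb = list(pv_sizes_gb)  # the PVs in this group
        self.lvs = {}                          # LV name -> size in GB

    @property
    def capacity_gb(self):
        # VG capacity is simply the sum of its physical volumes
        return sum(self.pv_sizes_gb)

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.lvs.values())

    def add_pv(self, size_gb):
        # Growing the VG by adding a PV (roughly what vgextend does)
        self.pv_sizes_gb.append(size_gb)

    def create_lv(self, name, size_gb):
        if size_gb > self.free_gb:
            raise ValueError(f"not enough free space in {self.name}")
        self.lvs[name] = size_gb

vg = VolumeGroup("vg_data", pv_sizes_gb=[50, 50])
vg.create_lv("lv_app", 80)         # an LV can span PVs; LVM hides the boundary
vg.add_pv(100)                     # a new disk arrives: extend the group
vg.create_lv("lv_logs", 60)
print(vg.capacity_gb, vg.free_gb)  # 200 total, 60 free
```

This is exactly the property exploited in the article's "grow the LVM volume" step: adding a PV raises the VG's capacity, and existing LVs (and the file systems on them) can then be extended into the new space.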
Sreek Iyer 2000001K7N Tags:  stepbystep tsam cloud_certification chapter16 tivoli cloud cloud-computing isdm 1 Comment 5,577 Views
Capacity Planning for the Management Platform
Sizing the management platform means sizing each of the following components that provide its functional capabilities:
The sizing will further be affected by the non-functional considerations that each of these management platform components needs to address. You should review the performance reports and workload-handling capabilities of each of the selected products to validate that the sizing considered can meet the non-functional requirements of the solution.
The size of the management platform depends on the size of the managed environment. It is preferable to keep a centralized management environment and scale it as needed when the managed environment grows. This is often not an easy calculation or a simple process; you need to apply sound engineering to plan the capacity for each capability. Apart from the capabilities discussed above, the following key areas also need to be covered:
Tivoli Service Automation Manager Version 7: Capacity Planning Cookbook is an excellent document that covers the various aspects in detail and provides some samples.
The book also links to other whitepapers that provide interesting further reading on the subject.
JeffHebert — Tags: virtual, paas, enterprise, elastic, secure, cloud, iaas, saas, scalable, ibm, reliable
JeffHebert — Tags: iaas, server, ibm, analytics, virtualize, cloud, saas, software, paas, storage
How do I size my cloud?
A cloud is not a cloud if it is not elastic, and the elastic property of the cloud – expanding and shrinking based on demand – is possible only with proper capacity planning. I feel the most difficult exercise in building a cloud solution is capacity planning for your cloud. By this I mean you have to size
Most of the engagements I’ve walked into have some existing capacity or infrastructure that the client wants us to leverage in the cloud. Comparison becomes difficult if you don’t have a standard measuring unit for your infrastructure – for instance, how does a quad-core on an Intel platform compare to a POWER7 core? I found a good explanation in this interesting article –
The answer to this difficult question was to use something called the cloud CPU unit, which is simply the computing power of a one-gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz, has the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
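That arithmetic is easy to capture as a tiny helper (the function name is mine, not from the article):

```python
def cloud_cpu_units(cpus, cores_per_cpu, ghz_per_core):
    """Cloud CPU units, where 1 unit = the power of a 1 GHz core."""
    return cpus * cores_per_cpu * ghz_per_core

# The example from the article: 2 CPUs x 4 cores x 3 GHz
print(cloud_cpu_units(2, 4, 3))  # 24

# The same unit lets you compare dissimilar boxes, e.g. a
# 1-CPU, 8-core machine at 2.5 GHz:
print(cloud_cpu_units(1, 8, 2.5))  # 20.0
```

Normalizing every box to GHz-equivalents this way is what makes mixed Intel/POWER inventories comparable for sizing purposes.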
The other dimension of the complexity is determining the resource needs and doing the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask:
The IBM infrastructure planner for cloud made life easy for me: it has a user-friendly interface that takes you through these steps and arrives at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I’ll discuss the details of how to plan the managed environment in my next post.
I’ll be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.