Over the years, growing storage needs have driven the search for simpler ways to manage file storage. That is not an easy task, because storage users also want efficiency, control, and strong service levels, all of which are necessary to keep up with an ever-changing business world. This is where file virtualization comes in.
Storage systems rely on specialized hardware, software, and disk drives to deliver fast and reliable storage for data processing and computing. Managing storage and data is a complex and time-consuming activity; storage virtualization improves this, making it possible to take backups and complete tasks in an efficient manner. The technique is by no means new, but it has become hugely popular in recent years because of its many benefits. Storage virtualization basically aggregates the physical storage from multiple network storage devices so that it looks like a single storage device; beyond reducing the complexity of the entire storage management process, it also improves disaster recovery if the storage area network (SAN) is destroyed.
Block virtualization and file virtualization
There are two primary types of virtualization in storage systems: block-level virtualization and file-level virtualization. The first type is typically found in the SAN environments of large businesses and enterprises. It means that every block in a block-level system can be controlled as if it were an individual hard drive, with the blocks managed by server-based operating systems.
A physical storage device exposes a number of blocks and a set of commands to read and write blocks of data; block-level virtualization imitates this interface so that it can accept the same commands and return the same results. Using block-level storage protocols such as iSCSI (Internet Small Computer Systems Interface), Fibre Channel, or Fibre Channel over Ethernet, the storage blocks are made visible and accessible to the server-based operating system.
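As a rough illustration of what that looks like in practice (the portal address and target name below are placeholders, not taken from any particular array), a Linux host using the open-iscsi tools might attach a virtualized block device like this:
# Discover the targets exported by the storage system
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.10
# Log in to a discovered target; the LUN then appears as an ordinary block device (for example /dev/sdb)
sudo iscsiadm -m node -T iqn.2016-01.com.example:storage.lun1 -p 192.0.2.10 --login
lsblk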
File system virtualization fundamentally refers to the aggregation of multiple file systems into a single, larger virtual file system. This technique serves applications that need to access data in the form of files rather than block by block; these files are usually found in file systems located on Network Attached Storage (NAS) devices.
One of the main drivers behind file virtualization was the desire to shield users from the complicated processes of the storage environment. This storage technology is usually accessed using a protocol such as Network File System (NFS) or Server Message Block (SMB)/Common Internet File System (CIFS).
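As a minimal sketch (the server name and export path are assumptions for illustration only), a client might mount such a virtualized namespace over NFS like this:
# One mount gives the client a single view, even if the files actually live on several NAS devices behind the virtualization layer
sudo mount -t nfs nas-vip.example.com:/export/global /mnt/global
ls /mnt/global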
The advantages of using file virtualization
The virtual file system interface, usually referred to as the vnode interface, connects the physical file systems, which contain the actual stored data, with the logical file systems, which are essentially representations of the physical files. File virtualization eliminates the dependency between the data being accessed and the location where the files are physically stored. As a result, storage usage and server consolidation become more efficient, and file migrations can be performed with minimal disruption. This type of virtualization also gives storage managers more flexibility and increases data mobility, because applications and users are no longer tied to the location of the underlying file systems. It can also help with file size limitations, because a file's storage can be spread across different servers.
File virtualization: a global namespace
One of the key benefits of file virtualization is that it provides a global namespace to index files on network file servers. A global namespace simplifies storage management in environments with many physical file systems. It is also a good solution for fast-growing environments where data needs to be accessed without knowing where it is physically located. A global namespace gives an organization access to a virtualized file system namespace, while also opening up more storage pools and helping reduce the number of mount points in an environment.
All in all, the evolution of storage virtualization means that previously unstructured data has become easier to manage, thus revolutionizing the technological world by transforming multiple network storage devices into what basically amounts to a single storage unit.
With any new technology, there’s “fake news”, and SD-WANs are no exception. It’s true, SD-WANs probably won’t reduce your WAN costs by 90 percent or make WANs so simple that a 12-year-old can deploy them. But there are plenty of reasons to be genuinely excited about the technology -- and we’re not just talking about cost savings. Often, these “other” reasons get lumped into the catechisms of greater “agility” and “ease of use,” but here’s what all of that really means.
Align the Network to Business Requirements
When organizations purchase computers for employees, they try to maximize their investment by aligning device cost and configuration to user function. Developers receive machines with fast processors, plenty of memory, and multiple screens. Salespeople receive laptops and designers get great graphics adapters (and Apples, of course).
SD-WANs allow us to do the same with the WAN. We can maximize our WAN investment by aligning the type of connectivity to business requirements. Connectivity can be tweaked based on availability options, types of transport, load balancing options, and more. Examples include:
- Mission critical locations, such as datacenters or regional hubs. These can be connected by active-active, dual-homed fiber connections managed and monitored 24x7 by an external provider -- and with a price tag that approaches MPLS.
- A single, xDSL connection. This can connect small offices or less critical locations for significant savings as compared against MPLS.
- Short-term connections. These can be set up with 4G/LTE and, depending on the service, mobile users can be connected with VPN clients.
All are governed by the same set of routing and security policies used on the backbone. By adapting the configuration to location requirements, businesses are able to improve their return on investment (ROI) from SD-WANs.
Easy and Rapid Configuration
For years, WAN engineering has meant learning CLIs and scripts and mastering protocols like BGP, OSPF, PBR, and more. It was an arcane art, and CCIEs were the master craftsmen of the trade. But for many companies, managing their networks in this way is too expensive and not very scalable. Some companies lack the internal engineering expertise; others have the expertise, but far too many elements in their networks.
SD-WANs may not make WANs simple, but they do allow your networking engineers to be more productive by making WANs much easier to deploy and manage. The “secret sauce” is extensive use of policies.
Policy configuration helps eliminate “snowflake” deployments, where some branch offices are configured slightly differently than other offices. Policies allow for zero-touch provisioning and deployment. Policies also guide application behavior, making it easier to deliver new services across the WAN without adversely impacting the network. With an SD-WAN, you really can drop-ship an appliance to Ittoqqortoormiit, Greenland and have just about anyone install the device.
Limit Spread of Malware
SD-WANs position an organization to stop attacks from across the WAN. The MPLS networks that drive most enterprises were deployed at a time when threats predominantly came from outside the company. “Security” meant protecting the company’s central Internet access point and deploying endpoint security on clients. Once inside the enterprise, though, many WANs are flat networks with all sites able to access one another. Malware can move laterally across the enterprise easily, as happened in the Target breach that exposed 40 million customer debit and credit card accounts.
SD-WANs start to address some of these challenges by segmenting the WAN at layer three (actually, layer 3.5, but let’s not get picky) with multipoint IPsec tunnels. The SD-WAN nodes in each location map VLANs or IP address ranges to the IPsec tunnels (the “overlays”) based on customer-defined policies. Users are limited to seeing and accessing the resources associated with that overlay. As such, rather than being able to attack the complete network, malicious users can only attack the resources accessible from their overlays. The same is true with malware. Lateral movement is limited to other endpoints in the overlay -- not the entire company.
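Every SD-WAN vendor expresses these policies through its own management plane, but as a rough, hypothetical illustration using plain Linux tools (the interface names and addresses below are made up), pinning one VLAN to its own IPsec-backed overlay looks conceptually like this:
# Terminate the "guest" VLAN (ID 30) on the SD-WAN node
ip link add link eth0 name eth0.30 type vlan id 30
ip addr add 10.30.0.1/24 dev eth0.30
ip link set eth0.30 up
# Give that VLAN its own routing table whose only exit is the guest overlay's tunnel interface (tunnel setup not shown), so its traffic cannot reach other segments directly
echo "30 guest-overlay" >> /etc/iproute2/rt_tables
ip rule add iif eth0.30 lookup guest-overlay
ip route add default dev vti-guest table guest-overlay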
Don’t Sweat the Backhoe
As much as MPLS service providers manage their backbones, none of that will protect you from the errant backhoe operator, the squirrels, or any one of a dozen other “mishaps” that break local loops. Redundant connections are what’s needed.
With MPLS, that would normally mean connecting a location with an active MPLS line and a passive Internet connection that’s only used for an outage. Running active-active is possible, but can introduce routing loops or make route configuration more complicated. Failover between lines with MPLS is based on DNS or route convergence, which takes too long to sustain a session. Any voice calls, for example, in process at the moment of a line outage will be disrupted as the sessions switch onto a secondary line.
With SD-WANs’ use of tunneling, running active-active is not an issue. The SD-WAN node will load balance the connections and maximize use of the available bandwidth. Which path is used is determined by the same user-configured traffic policies that drive the rest of the SD-WAN. Should there be a failure, some SD-WANs can fail over to secondary connections (and back) fast enough to preserve the session. The customer’s application policies continue to determine access to the secondary line under the additional demand.
Conventional enterprise wide area networks are a hodgepodge of routers, load balancers, firewalls, next-generation firewalls (NGFW), anti-virus and more. SD-WANs change all of that with a single, consistent, policy-based network, making it far easier to configure, deploy, and adapt the WAN. As SD-WANs evolve to include security functions as well, the agility and usability of SD-WANs will only grow.
Dave Greenfield is a secure networking evangelist at Cato Networks
I'll make no bones about the fact that I'm a huge fan of Cloud Foundry. It's the right play, by the right people, at the right time. Despite all the attempts to dilute the message over the last eleven years, Platform as a Service (or what was originally called Framework as a Service) is about: write code, write data and consume services. All the other bits, from containers to the management of them, are red herrings. They may be useful subsystems but they miss the point, which is the necessity for constraint.
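To make that concrete, here is a minimal sketch of that developer experience with the standard cf CLI (the application name, service offering and plan are placeholders for illustration, not something from this argument):
# Push the code; the platform stages and runs it, with no servers or containers to manage by hand
cf push my-app
# Consume a data service from the marketplace and bind it to the app
cf create-service p-mysql small my-db
cf bind-service my-app my-db
cf restage my-app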
Constraint (i.e. the limitation of choice) enables innovation, and the major problem we have with building at speed is almost always duplication or yak shaving. Not only do we repeat common tasks to deploy an application, but most of our code is endlessly rewritten throughout the world. How many times in your coding life have you written a method to add a new user or to extract consumer data? How many times do you think others have done the same thing? How many times are not only functions but entire applications repeated endlessly across corporations or governments? The overwhelming majority of the stuff we write is yak shaving, and I would be honestly surprised if more than 0.1% of what we write is actually unique.
Now, whilst Cloud Foundry has been doing an excellent job of getting rid of some of the yak shaving, in the same way that Amazon kicked off the removal of infrastructure yak shaving (for most of us, unboxing servers, racking them and wiring up networks is thankfully an irrelevant thing of the past), there is much more to be done. There are some future steps that I believe Cloud Foundry needs to take, and fortunately the momentum behind it is such that I'm confident of talking about them here without giving a competitor any advantage.
First, it needs to create that competitive market of Cloud Foundry providers. Fortunately, this is exactly what it is helping to do. That market must also be focused on differentiation by price and quality of service and not the dreaded differentiation by feature (a surefire way to create a collective prisoner's dilemma and sink a project in a utility world). This is all happening and it's glorious.
Second, it needs to increasingly leave the past ideas of infrastructure behind, and by that I mean containers as well. The focus needs to be serverless, i.e. you write code, you write data and you consume services. Everything else needs to be buried as a subsystem. I know analysts run around going "is it using Docker?" but that's because many analysts are halfwits who like to gabble on about stuff that doesn't matter. It's irrelevant. That's not the same as saying Docker is not important; it has huge potential as an invisible subsystem.
Third, the existing monitoring and logging capabilities are fine but not enough. It needs to go deeper (hint to providers: that needs an asynchronous mechanism of logging, things like network taps; it worked a treat in 2005).
Fourth, and most importantly, it needs to tackle yak shaving at the coding level. The simplest way to do this is to provide a CPAN-like repository which can include individual functions as well as entire applications (hint: GitHub probably isn't up to this). One of the biggest lies of object-oriented design was code re-use. This never happened (or rarely did) because no communication mechanism existed to actually share code. CPAN (in the Perl world) helped (imperfectly) to solve that problem. Cloud Foundry needs exactly the same thing. When I'm writing a system, if I need a customer object, then ideally I should just be able to pull in the entire object and its related functions from a CPAN-like library because, let's face it, how many times should I really have to write a postcode lookup function?
But shouldn't things like postcode lookup be provided as a service? Yes! And that's the beauty.
By monitoring a CPAN-like library you can quickly discover (simply by examining metadata such as downloads and changes) which functions are commonly being used and have become stable. These are all candidates for standard services to be provided into Cloud Foundry and offered by the CF providers. Your CPAN environment is actually a sensing engine for future services, and you can use an ILC-like model to exploit this. The bigger the ecosystem is, the more powerful it will become.
I would be shocked if Amazon isn't already using Lambda and the API gateway to identify future "services", and Cloud Foundry shouldn't hesitate to press any advantage here. This process will also create a virtuous cycle, as new things which people develop and share in the CPAN-like library will over time become stable, widespread and provided as services, enabling other people to more quickly develop new things. This concept of sharing code and combining the collaborative effort of the entire ecosystem was a central part of the Zimki play, and it's as relevant today as it was then. By the way, try doing that with containers. Hint: they are way too low level, and your only hope is through constraint such as that provided in the manufacture of unikernels.
There is a battle here, because if Cloud Foundry doesn't exploit the ecosystem and AWS plays its normal game, then AWS could run away with the show. The danger of this seems slight at the moment (but it will grow) because of the momentum behind Cloud Foundry and because of the people running the show. Get this right and we will live in a world where not only do I have portability between providers, but when I come to code my novel idea for my next great something, I'll discover that 99% of the code has already been done by others. I'll mostly need to stitch the right services and functions together and add a bit extra.
Oh, but that's not possible, is it? In 2006, Tom Insam wrote for me, and released live to the web, a new style of wiki (with client-side preview) in under an hour using Zimki. I wrote an internet mood map and a basic trading application in a couple of days. Yes, this is very possible. I know, I experienced it, and this isn't 2006, this is 2016!
Cloud Foundry (with a bit of luck) might finally release the world from the endless yak shaving we have to endure in IT. It might make the lie of object re-use finally come true. The potential of the platform space is vastly more than most suspect, and almost everything, and I do mean everything, will be rewritten to run on it.
I look forward to the day when most yaks come pre-shaved.
Microservice architecture resembles a Service-Oriented Architecture (SOA) in that both rely on cohesive, loosely coupled services strung together to provide a solution. Beyond this similarity, the commonality largely ends. A microservice architecture consists of completely decoupled services that communicate with each other via REST+HTTP API interfaces. Each service can run in its own environment, including a different programming language, and each service can have its own deployment and management cycle while keeping the final solution consistent.
Docker is a technology that is naturally suited to building an application with a microservice architecture. You can visualize Docker as a wafer-thin Linux virtualization layer that can host multiple containers without a tight dependency on the host operating system. Compared to a regular VM, it is lightweight and more manageable. You can have microservices loaded into individual containers, each completely isolated from the others. Besides isolation, Docker also provides a consistent environment as code moves from Development -> QA -> Production. Developers can run development-time Docker containers on minimal hardware resources and have the same code deployed consistently in full-scale production environments, including cloud and PaaS/IaaS infrastructures. Same code, different scale!
With that in mind, I would like to cover the implementation details associated with developing Java microservices in a Docker environment.
This step involves developing the Java Microservice. Keeping the scope of the blog in mind, this microservice can be downloaded and run with the following instructions
Implementation details about the Microservice can be studied in the source code by loading the project into your preferred Java IDE such as Eclipse.
Before the microservice can be run inside Docker, Docker must be installed on your local machine. You can follow the step-by-step Docker installation procedure at: Docker Installation
Once Docker is installed correctly, you can test your installation using the following command:
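For example, a quick sanity check is to print the version and run Docker's own test image:
docker version
docker run hello-world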
Create a Microservice Docker Image
In the Docker ecosystem, there are two main concepts to understand.
- Docker container: a lightweight, isolated instance of a Linux environment running on top of your host operating system
- Docker image: a packaged snapshot of your application software plus the entire environment it needs, which runs inside a container
For the above microservice, the container loads the microservice image, which contains not only the application code for the microservice but also the Java 8 environment needed to run it.
But, before you can load the microservice into Docker, you need to create a Docker image for that software. The steps to create the image are as follows:
|Create a build directory next to your microservice project
mkdir ../hello-microservice-build
|Copy microservice artifacts to the build directory
cp target/hello-microservice-1.0-SNAPSHOT.jar ../hello-microservice-build
cp hello-microservice.yaml ../hello-microservice-build
|Create the Dockerfile under the hello-microservice-build directory with the following contents
FROM java:8
ADD hello-microservice-1.0-SNAPSHOT.jar hello-microservice-1.0-SNAPSHOT.jar
ADD hello-microservice.yaml hello-microservice.yaml
EXPOSE 9000 9001
CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml
|From the Docker session, go to the hello-microservice-build directory and issue the command
||docker build -t hello-microservice-local .
The Docker build process uses a file named Dockerfile to get its instructions about what to do when building an image. In this particular microservice, the Dockerfile instructs the Docker system to download an image called 'java:8'. This is the core infrastructure needed to run the microservice. Next it adds the microservice jar and configuration to the image. And later, it exposes the ports 9000 and 9001 to service the requests.
docker build -t hello-microservice-local . is the command that processes the Dockerfile and produces the hello-microservice-local image.
Note: make sure this command is issued from the Docker session and not just any command line session.
Once this Java Microservice Docker image is created, it must be run inside a Docker container using the following command:
|docker run -p 9000:9000 --name hello-microservice-local -t hello-microservice-local
|-p: maps container port 9000 to port 9000 on the host machine
You can test the Microservice in the browser using: http://localhost:9000/java/microservice
Once this works, you can stop the microservice using: docker stop hello-microservice-local
Publish the Microservice Docker Image
Now that you have the Microservice Docker Image working locally, you can publish this image to DockerHub to share with your team. This can be accomplished as follows:
Steps to post your local image to the remote repository
|Get the image id of the local hello-microservice-local image using: docker images
|Tag the local image for push to remote repository
docker tag localimageid yourdockerhubusername/hello-microservice-remote:latest
|Push to the remote repository
docker push yourdockerhubusername/hello-microservice-remote
Before testing the remote image, you need to delete the local images. Get the image id for both 'hello-microservice-local' and 'hello-microservice-remote' using: docker images
and remove the two images using the command: docker rmi -f imageid
Once the images are removed, you can test the remote image using the following command:
|docker run -p 9000:9000 --name hello-microservice-remote -t yourdockerhubusername/hello-microservice-remote
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization.
If you're in North America, register here for the April 16th session:
If you're in Asia Pacific, register for the April 23rd session:
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes difficult to accurately assess or manage. Integrating cost management into overall service management is crucial, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency: the ability to allocate IT costs, usage, and value.
Register today: http://bit.ly/VXXxl3
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. Learn more: http://ibm.co/UeAl0B
The challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity. A critical piece to solving these challenges, as many organizations have already discovered, is image management. Read more: http://ibm.co/SpHTlV
I hope you will enjoy the read and provide your comments. I especially wanted to highlight how we can improve trust.
Trust in cloud can be established with the same principles that we use for traditional service management (read my earlier post on Cloud Computing Central for details):
- Visibility – The ability to see everything that’s going on across the infrastructure
- Control – The ability to keep the infrastructure in its desired state by enforcing policies
- Automation – The ability to manage huge and growing infrastructures while controlling cost and quality.
Leverage IBM QRadar Security Information and Event Management (SIEM) to build visibility into your cloud infrastructure.
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model, either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they're tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
Read more about provisioning and orchestration capabilities to meet growing business needs.
I'm glad to let the Cloud Computing Central members know that I've also started writing on ThoughtsonCloud, the IBM cloud experts blog. Please read my first post about maximizing the value of cloud for small and medium enterprises (SMEs), and let me know your comments and feedback. Thanks.
Cloud computing is a term that is often bandied about the web these days and often attributed to different things that -- on the surface -- don't seem to have that much in common. So just what is cloud computing? I've heard it called a service, a platform, and even an operating system. Some even link it to such concepts as grid computing -- which is a way of taking many different computers and linking them together to form one very big computer.
The basic definition of cloud computing is the use of the Internet for the tasks you perform on your computer. The "cloud" represents the Internet.
Cloud Computing is a Service
The simplest thing that a computer does is allow us to store and
retrieve information. We can store our family photographs, our favorite
songs, or even save movies on it. This is also the most basic service
offered by cloud computing.
Flickr is a great example of cloud computing as a service. While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to store those images. In many ways, it is superior to storing the images on your computer.
First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload the photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road, or even from your iPhone while sitting in your local coffee house.
Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.
Flickr also provides data security. If you keep your photos on your local computer, what happens if your hard drive crashes? You'd better hope you backed them up to a CD or a flash drive! By uploading the images to Flickr, you are providing yourself with data security by creating a backup on the web. And while it is always best to keep a local copy -- either on your computer, a compact disc or a flash drive -- the truth is that you are far more likely to lose the images you store locally than Flickr is to lose your images.
This is also where grid computing comes
into play. Beyond just being used as a place to store and share
information, cloud computing can be used to manipulate information. For
example, instead of using a local database, businesses could rent CPU
time on a web-based database.
The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them all if you are cut off from the Web.
Cloud Computing is a Platform
"The web is the operating system of the future." While not exactly true -- we'll always need a local operating system -- this popular saying really means that the web is the next great platform.
What is a platform? It is the basic structure on which applications stand. In other words, it is what runs our apps. Windows is a platform. The Mac OS is a platform. But a platform doesn't have to be an operating system. Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends such as Office 2.0, we are seeing more and more applications that were once the province of desktop computers being converted into web applications. Word processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small businesses.
But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes, from web mashups to Facebook applications to web-based massively multiplayer online role-playing games.
With new technologies that help web applications store some information
locally -- which allows an online word processor to be used offline as
well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.
Cloud Computing and Interoperability
A major barrier to cloud computing is the interoperability of
applications. While it is possible to insert an Adobe Acrobat file into a
Microsoft Word document, things get a little bit stickier when we talk
about web-based applications.
This is where some of the most attractive elements of cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- become a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.
Setting aside for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into their spreadsheet, this creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, we'll also need a system to maintain a certain level of security when it comes to this type of data.
Possible? Certainly, but it isn't anything that will happen overnight.
What is Cloud Computing?
This brings us back to the initial question: what is cloud computing? It is the process of taking the services and tasks performed by our computers and bringing them to the web.
What does this mean to us?
With the "cloud" doing most of the work, this frees us up to access the
"cloud" however we choose. It could be a super-charged desktop PC
designed for high-end gaming, or a "thin client" laptop running the
Linux operating system with an 8 gig flash drive instead of a
conventional hard drive, or even an iPhone or a Blackberry.
We can also get at the same information and perform the same tasks whether we are at work, at home, or even at a friend's house. Not that you would want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.
One of the exciting and valuable characteristics of IBM SmartCloud Enterprise is its tight linkage with the IBM Software Group portfolio of offerings. In addition to the offerings from IBM Software Group, innovative software vendors are making exciting offerings available as well. There is an ever-growing list of offerings available to IBM SmartCloud Enterprise customers. These recent additions are now in the SmartCloud Enterprise public catalog and available for you to use.
BYOL - Bring Your Own License; PAYG - Pay As You Go
IBM Business Process Manager is a comprehensive BPM platform giving you visibility and insight to manage business processes. It scales smoothly and easily from an initial project to a full enterprise-wide program. IBM Business Process Manager harnesses complexity in a simple environment to break down silos and better meet customer needs.
The following BPM images are now available in the catalog:
IBM Process Center Advanced 7.5.1 64b - BYOL
IBM Process Center Standard 7.5.1 64b - BYOL
IBM Integration Designer 7.5.1 64b - BYOL
IBM Process Server Advanced 7.5.1 64b - BYOL
IBM Process Server Standard 7.5.1 64b - BYOL
IBM Process Designer 7.5.1 64b - BYOL, PAYG
IBM BPM Express 7.5.1 64b - BYOL, PAYG
IBM WebSphere Service Registry and Repository (WSRR) is a system for storing, accessing and managing information, commonly referred to as service metadata, used in the selection, invocation, management, governance and reuse of services in a successful Service Oriented Architecture (SOA). In other words, it is where you store information about services in your systems, or in other organizations' systems, that you already use, plan to use, or want to be aware of.
The following WSRR images are now available in the catalog:
IBM WebSphere Service Registry 64bit BYOL
IBM WebSphere Service Registry 64bit BYOL
IBM WebSphere Message Broker (WMB) delivers an advanced Enterprise Service Bus (ESB) that provides connectivity and universal data transformation for both standards-based and non-standards-based applications and services to power your SOA.
The following WMB images are now available in the catalog:
IBM WebSphere Message Broker 64b BYOL
IBM SPSS Decision Management enables business users to automatically deliver high-volume, optimized decisions at the point of impact to achieve superior results.
The following SPSS image is now available in the catalog
IBM SPSS Decision Management 6.2 64b BYOL
From our partner Riverbed comes Riverbed® Stingray™, a software-based application delivery controller (ADC) designed to deliver faster and more reliable access to public web sites and private applications.
The following Riverbed Stingray images are now available in the catalog:
Riverbed Stingray V 8.0 RHEL 6 32 bit BYOL
Riverbed Stingray V 8.0 RHEL 6 64 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 32 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 64 bit BYOL
Additionally, Alphinat SmartGuide provides visual, drag and drop tools that can help you quickly build interactive web dialogues that guide people to the relevant response, help them diagnose problems or lead them through a series of well-defined steps that make it easy to complete complex—or infrequently performed—tasks.
The following Alphinat SmartGuide images are now available in the catalog:
Alphinat SmartGuide 5.1.3 SLES 11 SP1 32-bit PAYG
Alphinat SmartGuide 5.1.3 SLES 11 SP1 32-bit BYOL
GridRobotics' Cloud Lab Grid Automation Server can manage any number of client or agent computers, which can be spun up automatically on public clouds like IBM SCE or private clouds. Grid Robotics’ Cloud Lab Classroom is a virtual classroom management solution.
The following GridRobotics Cloud Lab images are now available in the catalog:
GridRobotics Cloud Lab Grid Automation Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Classroom Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Base Agent V 1.4 32b R2 - BYOL
We are committed to adding value continuously to IBM SmartCloud Enterprise to help you advance cloud in your organization.
Securing the Virtual Infrastructure
Cloud computing tests the limits of security operations and infrastructure from various perspectives. Let us examine what is different about cloud security, and identify the existing threats as well as the new areas that we should be concerned about.
Figure 2 Cloud Security - Existing & New Threats
I think what makes cloud security complex is the number of layers involved in the cloud service stack and the number of components in each layer. That means:
- Increased infrastructure layers to manage and protect
- Multiple operating systems and applications per server
- More components = more exposure
As we can see, we already do perimeter protection at the network and operating system levels, as well as physical and personnel security, for the traditional infrastructure. All of these hold good for cloud as well, to combat the existing threats at those layers.
Let us examine the new points of exposure with cloud. Security and resiliency complexities are raised by virtualization and automation, which are essential to cloud. The new risks include:
- Cloud service management vulnerabilities
- Secure storage of VMs and the
- Managing identities on the increasing number of virtual assets
- Stealth rootkits in hardware now possible
- Virtual NICs & virtual hardware
- Virtual sprawl, VM stealing
- Dynamic relocation of VMs
- Elimination of physical boundaries
- Manually tracking software and configurations of VMs
For managing these additional complexities, you need a reference model that is comprehensive and covers security controls that can combat not only the existing challenges but also the new challenges that cloud brings in. The Foundational Security Controls for the IBM cloud reference model (see below) provide the different elements and controls required to build a secure cloud.
Figure 1 Foundation Security Controls for IBM Cloud
Managing datacenter identities (identity and access management) is one of the topmost security concerns, and we discussed how to handle it in my previous post. I'll discuss how to handle the virtualization-related threats in my next post.
Meanwhile, let me know your comments on this reference model. Do you think this set of controls is comprehensive? Do you see any areas not covered from a cloud security perspective? If so, just add a comment to this post and let us discuss.
Managing Datacenter Identities for Cloud
In my earlier post on the challenges for cloud, I discussed security as the top concern. I also detailed the top concerns with regard to securing the cloud in the subsequent post. Cloud computing tests the limits of security operations and infrastructure across the various security and privacy domains.
Cloud brings in a lot of additional considerations, such as multi-tenancy, data separation, and virtualization. In a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases, greatly affecting all aspects of IT security. We will discuss the different security aspects, classifying them against specific adoption patterns (see post here).
The cloud-enabled data center pattern is the more predominant one, with infrastructure and identity management as the top concerns. Within cloud security, getting the infrastructure security design right is the most important aspect; we discussed the details, and how different public clouds do it, in the previous post. Now, with regard to identity, let's discuss the top requirements and use cases, and look at the solutions we can provide to make the cloud secure. Let's start with managing datacenter identities, which is the top concern.
Managing Datacenter Identities
Identity and access control needs to deliver capabilities that provide role-based access to securely connect users to the cloud. The users include cloud service provider as well as consumer roles, and within each user group we need to support both user and administrator roles. Identity and access management should cover the 4 As: authentication, authorization, auditing, and assurance.
- For a cloud consumer user, it is about making sure the user identity is verified and authenticated at the self-service portal, and providing the right access to the resource pools.
- For the administrator, we need to provide role-based access to Service Lifecycle Management functions.
- We will need to integrate with existing user directory infrastructure (AD/LDAP/NIS) to extend the user identity to the cloud environment as well.
- Once in the cloud environment, we need to automatically manage access to the cloud resources, through provisioning and de-provisioning of resource profiles and users against the resources in the cloud identity and access management systems. Manual processes to manage accounts for users on various virtual systems and applications are not going to scale in a cloud environment. The same is true of the manual processes to review various audit logs to meet compliance and audit requirements.
- Massively parallel cloud computing infrastructures involve enormous pools of external users as well. We need to ensure a smooth user experience, so that users don't need to enter their credentials multiple times to access various applications hosted within the enterprise or by business partners and cloud providers.
- Management of user identities and access rights across hosted, private and hybrid clouds for internal enterprise users is also a major challenge. This includes:
o Centralized user access management for on- and off-premise applications
o Federated single sign-on and identity mediation across different service providers
Let's look at some of the capabilities that we can leverage to address these requirements.
IBM Security Identity and Access Assurance provides the following capabilities, which enable clients to reduce costs, improve user productivity, strengthen access control, and support compliance initiatives:
- Automated and policy-based user management that helps effectively manage user accounts
- Enterprise, Web, and federated single sign-on, inside, outside, and between organizations, including cloud deployments
- Identity and access support for files, operating platforms, Web, social networks, and cloud-based applications
- Integration with stronger forms of authentication (smart cards, tokens, one-time passwords, and so on)
- Monitoring, investigating, and reporting on user activity across the environment
Tivoli Identity Manager complements its role management
capabilities with role mining and lifecycle management, provided by the
IBM Security Role and Policy Modeler component, which helps reduce time
and effort to design an enterprise role and access structure, and
automates the process to validate the access information and role
structure with the business.
Security Access Manager for Enterprise Single Sign-On offers wide
platform coverage, strong authentication enhancements, and simpler
deployments. It introduces 64-bit
operating system and application support, a virtual appliance for easier
installation and configuration of the server, expanded support for smart
cards, and simplified profiling.
Tivoli Federated Identity Manager offers additional Open Authorization (OAuth) authorization standards support (for business-to-consumer deployments and utilization of cloud-based applications and identities), enhanced security with the Secure Hash Algorithm (SHA-2), usability enhancements, and new Business Gateway capabilities.