Have you checked out the features in the new release of IBM Smart Business Development and Test on the IBM Cloud? Well, you should. Version 1.1 provides support for Virtual Private Networks and Virtual Local Area Networks, plus new premium support services are now available. I've heard from my tweeps on Twitter that the new release rocks, so I had to share the news with all of you in our very cool developer community.
Okay, so if you want to realize faster application deployment with reduced costs, you have to check out the IBM Cloud. You have virtually no infrastructure to maintain, and you benefit from pay-as-you-go pricing. And you can set up more accurate test environments in minutes rather than weeks using standardized configurations. Sound irresistible?
So you ask, what does this new release really mean for me as a developer? Well, here's a quick summary of what Version 1.1 has to offer:
Security is a top priority: you can now use a VPN to access your machine instances on the IBM Cloud, giving you virtual network isolation of your instances. Each VPN service consists of a private virtual LAN (VLAN) in an IBM Cloud Center of your choice plus a VPN gateway for accessing that VLAN. Pretty cool!
In addition, the VPN option isolates your development and test environment on the IBM Cloud on a VLAN that only you can access. Your instance is not accessible from the Internet or from other instances unless you have provisioned them to use your private VLAN. Very secure.
New premium support services have been added. On top of the existing tech support, you can purchase premium levels of support that include around-the-clock telephone support, a web-based ticketing system to submit and review service requests, and remote technical support to assist you with the Cloud web portal, access to services, instance creation, and image management functions within the portal. You can also add Linux operating system support for Linux OSes provisioned through the Cloud web portal, including support for virtual machine instances. This is really awesome.
Driven by trends in the consumer internet, cloud computing is becoming the new way to consume and deliver IT services. As IT professionals, we need to understand the different aspects of cloud to seize this opportunity to grow our careers and serve our clients towards a successful adoption of cloud computing.
I'm in the process of learning several aspects of cloud - emerging trends in cloud solutions, workloads, infrastructure, technologies and the modern services industry. So I thought of this idea to post my learning as a series of blogs from which any cloud enthusiast can benefit to understand cloud computing. When discussing a topic, instead of reinventing the wheel, let's build the content with links to different articles that readers can follow for further reading.
The articles shall cover the entire lifecycle of a cloud project, covering various aspects right from business requirements, Architecture/Design, and Implementation through to Operations. The intention of this blog is to provide the reader a step-by-step understanding of any one or more of the following broad topics of Cloud Computing:
· Delivery Models - Infrastructure as a Service, Platform as a Service, Software as a Service, Business Process as a Service
· Deployment Models - Private Clouds, Public Clouds, Hybrid Clouds, Industry Clouds
· Management - Asset Management, Business Resiliency, Service Management, Capacity Planning, Charging models and economics, Usage Reporting, Billing & Metering, Provisioning, Monitoring
We will have something to learn every week and will dedicate each week to understanding one of the above topics. So by the end of the 16 weeks we have remaining in the year, we will have learned all the steps to walk on cloud. The comments on these posts from all of the members will definitely go a long way in getting our steps right and enriching the content. So c'mon everyone, let's take a walk in the clouds – step by step…
I just wanted to give everybody a quick update on the Cloud Certification. I took the pre-assessment exam, Test 000-032: Foundations of IBM Cloud Computing Architecture V1, and I'm happy to say I received a passing score of 75%. The pre-assessment exam was broken down into three sections:
Cloud Computing Concepts and Benefits
Cloud Computing Design Principles
IBM Software Cloud Computing Architecture
Believe it or not, I had the most difficulty with section 3, identifying IBM software products. Maybe that's something we can discuss during our next study session. Has anybody else taken the pre-assessment exam? I'd like to hear your thoughts about it.
I'll make no bones about the fact that I'm a huge fan of Cloud Foundry. It's the right play, by the right people, at the right time. Despite all the attempts to dilute the message over the last eleven years, Platform as a Service (or what was originally called Framework as a Service) is about write code, write data and consume services. All the other bits, from containers to the management of such, are red herrings. They may be useful subsystems but they miss the point, which is the necessity for constraint.
Constraint (i.e. the limitation of choice) enables innovation, and the major problem we have with building at speed is almost always duplication or yak shaving. Not only do we repeat common tasks to deploy an application but most of our code is endlessly rewritten throughout the world. How many times in your coding life have you written a method to add a new user or to extract consumer data? How many times do you think others have done the same thing? How many times are not only functions but entire applications repeated endlessly between corporates or governments? The overwhelming majority of the stuff we write is yak shaving and I would be honestly surprised if more than 0.1% of what we write is actually unique.
Now whilst Cloud Foundry has been doing an excellent job of getting rid of some of the yak shaving, in the same way that Amazon kicked off the removal of infrastructure yak shaving - for most of us, unboxing servers, racking them and wiring up networks is thankfully an irrelevant thing of the past - there is much more to be done. There are some future steps that I believe Cloud Foundry needs to take and fortunately the momentum behind it is such that I'm confident of talking about them here without giving a competitor any advantage.
First, it needs to create that competitive market of Cloud Foundry providers. Fortunately this is exactly what it is helping to do. That market must also be focused on differentiation by price and quality of service and not the dreaded differentiation by feature (a surefire way to create a collective prisoner's dilemma and sink a project in a utility world). This is all happening and it's glorious.
Second, it needs to increasingly leave the past ideas of infrastructure behind, and by that I mean containers as well. The focus needs to be serverless, i.e. you write code, you write data and you consume services. Everything else needs to be buried as a subsystem. I know analysts run around going "is it using Docker?" but that's because many analysts are halfwits who like to gabble on about stuff that doesn't matter. It's irrelevant. That's not the same as saying Docker is not important; it has huge potential as an invisible subsystem.
Third, and most importantly, it needs to tackle yak shaving at the coding level. The simplest way to do this is to provide a CPAN-like repository which can include individual functions as well as entire applications (hint: GitHub probably isn't up to this). One of the biggest lies of object-oriented design was code re-use. This never happened (or rarely did) because no communication mechanism existed to actually share code. CPAN (in the Perl world) helped (imperfectly) to solve that problem. Cloud Foundry needs exactly the same thing. When I'm writing a system, if I need a customer object then ideally I should just be able to pull in the entire object and the functions related to it from a CPAN-like library, because let's face it, how many times should I really have to write a postcode lookup function?
But shouldn't things like postcode lookup be provided as a service? Yes! And that's the beauty.
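To make that concrete, here is a minimal sketch of the "write code, consume services" idea: rather than writing the lookup yet again, the application simply calls a shared service. The endpoint URL and response shape are hypothetical placeholders, not a real Cloud Foundry or Zimki API.

import json
from urllib.parse import quote
from urllib.request import urlopen

# Hypothetical shared postcode-lookup service endpoint (placeholder URL).
POSTCODE_SERVICE = "https://postcode-lookup.example.com/v1/lookup"

def lookup_postcode(postcode):
    # Consume the shared service instead of rewriting the lookup yet again.
    with urlopen(POSTCODE_SERVICE + "?postcode=" + quote(postcode)) as resp:
        return json.load(resp)

print(lookup_postcode("SW1A 1AA"))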
By monitoring a CPAN-like library you can quickly discover (simply by examining metadata such as downloads and changes) which functions are commonly being used and have become stable. These are all candidates for standard services to be provided into Cloud Foundry and offered by the CF providers. Your CPAN environment is actually a sensing engine for future services and you can use an ILC-like model to exploit this. The bigger the ecosystem, the more powerful it will become.
I would be shocked if Amazon isn't already using Lambda and the API Gateway to identify future "services", and Cloud Foundry shouldn't hesitate to press any advantage here. This process will also create a virtuous cycle: new things which people develop and share in the CPAN-like library will over time become stable, widespread and provided as services, enabling other people to more quickly develop new things. This concept of sharing code and combining the collaborative effort of the entire ecosystem was a central part of the Zimki play and it's as relevant today as it was then. By the way, try doing that with containers. Hint: they are way too low level and your only hope is through constraint such as that provided in the manufacture of unikernels.
There is a battle here because if Cloud Foundry doesn't exploit the ecosystem and AWS plays its normal game then AWS could run away with the show. The danger of this seems slight at the moment (but it will grow) because of the momentum behind Cloud Foundry and because of the people running the show. Get this right and we will live in a world where not only do I have portability between providers but, when I come to code my novel idea for my next great something, I'll discover that 99% of the code has already been written by others. I'll mostly need to stitch the right services and functions together and add a bit extra.
Oh, but that's not possible, is it? In 2006, Tom Inssam wrote for me and released live to the web a new style of wiki (with client-side preview) in under an hour using Zimki. I wrote an internet mood map and a basic trading application in a couple of days. Yes, this is very possible. I know, I experienced it, and this isn't 2006, this is 2016!
Cloud Foundry (with a bit of luck) might finally release the world from the endless yak shaving we have to endure in IT. It might make the lie of object re-use finally come true. The potential of the platform space is vastly more than most suspect and almost everything, and I do mean everything, will be rewritten to run on it.
I look forward to the day that most yaks come pre-shaved. For more, read...
Cloud Security – The Topmost Concern and Opportunity
First of all, wishing all my readers a very happy and prosperous year 2012 ahead.
A few things that happened towards the end of the year were significant to me. IBM acquired Q1 Labs to drive greater security intelligence and created a new Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I'm happy to be in this new role, where I can provide solutions to customers to handle their cloud security concerns and make it easy for them to adopt cloud and innovate at a faster rate than before.
In my previous post, we discussed security as the topmost concern keeping customers and enterprises from adopting cloud. As part of this year's posts, I plan to discuss the various security issues and aspects of cloud computing.
We will explore the unique challenges of Cloud Security and discuss which aspects are important for each customer adoption pattern that we have seen.
We will also learn how the IBM Security Framework can be used to address the various security challenges. I look forward to your comments and inputs in this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the world's most comprehensive security portfolio – IBM Security Systems. I'll try to elaborate the IBM point of view on cloud security and discuss the architectural model to address the security requirements for cloud. Stay tuned and keep those comments and inputs coming.
While I'm writing this blog, the Ministers of Tamil Nadu and Kerala are having a meeting with the Prime Minister to discuss the contentious Mullaperiyar issue at length. For those who don't know about this issue, it is about the Mullaperiyar Dam in Kerala. The Mullaperiyar Dam is a masonry gravity dam over the River Periyar, operated by the Government of Tamil Nadu based on a 999-year lease agreement. The catchment areas and river basin of the River Periyar downstream include five districts of central Kerala, namely Idukki, Kottayam, Ernakulam, Alappuzha and Trissur, home to a population of several million.
This dam is at centre stage again in the wake of reports that it is weakening, following an increase in incidents of tremors in Idukki district in Kerala. Ministers from Kerala are seeking Central Government intervention in ensuring the safety of the dam. At the same time, Tamil Nadu is insisting on increasing the water level in the reservoir to enhance water supply to the state. While Tamil Nadu wants to increase the water level in the reservoir, Kerala has been insisting that it be reduced from the current 136 feet to 120 feet.
Currently I don't think we have clear metrics on the exact usage of water by each state, what the right level of water to be retained by the dam is, what the risks are, and so on. We have been relying on the data that we already have.
However you look at it – whether too much or not enough – the world needs a smarter way to think about water. We need to look at the subject holistically, with all the other considerations as well. We use water for more than drinking. We need to make an inventory of how much water we get and how it is used – for industry, irrigation, etc.
This is where I think we need smarter ways to manage the water in the best possible way, addressing the concerns of both states. Smarter Water Management can help us think in a smarter way about water. For instance, IBM is helping the Beacon Institute to build a source-to-sea real-time monitoring network for New York's Hudson and St. Lawrence Rivers and report on conditions and threats in real time. There are many other case studies across the globe on IBM Smarter Water Management.
Those interested in the problem and the possible solutions should definitely read IBM's broader outlook on Water Management as covered in the Global Innovation Outlook. Another interesting partnership, between IBM and The Nature Conservancy, is providing a state-of-the-art support system for a free, online application that will give easy access to data and computer models to help watershed managers assess how land use affects water quality.
Though it's a worldwide entity, water is treated as a regional issue. I think we should try putting technology to use to solve our water problems. The solution should be a more instrumented, interconnected and intelligent system that not only takes into consideration real-time monitoring of the river but also includes early warning systems to notify us of risks related to earthquakes and the like. IBM's Strategic Water Management Solutions include offerings to help governments, water utilities, and companies monitor and manage water more effectively. The IBM Strategic Water Information Management (SWIM) solutions platform is both an information architecture and an intelligent infrastructure that enables continuous automated sensing, monitoring, and decision support for water management.
Now you might be wondering what this has to do with Cloud and why this post is on Cloud Computing Central. For these solutions and platforms to be successful it is highly important that we have energy-efficient high-performance computing platforms and complex sensor, metering, and actuator networks. Such platform needs, and the flexible choice of having the solution on-premise as well as leveraging different delivery models, can only be supported through a cloud.
I think we should just leverage these solutions on the cloud to solve this issue and keep both states and their people happy :-).
As we discussed in the previous post, it is important that all the processes work together to bring successful automation to the cloud management platform. A process workflow automation engine is what makes this possible. In this chapter we will discuss the Tivoli process automation engine, which forms the base for IBM process automation in the cloud space.
The Tivoli process automation engine provides a user interface, configuration services, workflows and the common data system needed for IBM Service Management products and other services. As we already know, IBM Service Management (ISM) is a comprehensive and integrated approach for Service Management, integrating technology, information, processes, and people to deliver service excellence and operational efficiency and effectiveness for traditional enterprises, service providers, and mid-size companies. The Tivoli process automation engine, previously known as Tivoli base services, provides the base infrastructure for applications like Tivoli Maximo Asset Management, Change and Configuration Management Database (CCMDB), Tivoli Service Request Manager (SRM), Tivoli Asset Management for IT (TAMIT), Tivoli Provisioning Manager, as well as Tivoli Service Automation Manager. Any product that has the Tivoli process automation engine as its foundation can be installed with any other product that has the Tivoli process automation engine.
This common foundation enables:
· Management that integrates and automates IT management processes
· Management that integrates people, processes, information and technology for real business results
· Management to automate tasks to address application or business service operational management challenges
By having a common process automation engine, we can successfully link Operational and Business services with Infrastructure through a single (J2EE) platform. We can also leverage current investments by linking this engine with existing process automation technologies and products. So by building a unified platform to automate processes, we have taken data integration to the next level, where sharing data between applications has never been easier. This integrated process automation platform can support repeatable IT functions like Incident Management, Problem Management, Change Management and Configuration Management all the way through to Release Management. All of these processes tie into the CMDB, where they share consistent data via bidirectional integration. The platform supports best practices such as ITIL and other industry best practices. This facilitates an automated approach across the IT management lifecycle. It also forms the basis for automating repetitive tasks that can be handled by the system instead of requiring (costly) human intervention. Through its adapters, TPAE provides data federation from the multiple sources that you already have, translating the information into usable data that can be leveraged by internal processes and workflows.
Figure 1 Tivoli process automation integrated portfolio
A cloud is not a cloud if it is not elastic. The elastic property of the cloud – expanding and shrinking based on demand – is possible only with proper capacity planning. I feel the most difficult exercise while making a cloud solution is capacity planning for your cloud. By this, I mean you have to size the managed environment as well as the management environment.
Most of the engagements that I've walked into have some capacity or infrastructure that the client wants us to leverage and use in the cloud. So the comparison becomes difficult if you don't have a standard measuring unit for your infrastructure – for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in this interesting article.
The answer to the difficult question was to use something called the cloud CPU unit, which is nothing but computing power equal to the processing power of a one-gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz will have the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
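As a quick Python sketch of that arithmetic (the helper name below is mine, not part of any sizing tool):

def cloud_cpu_units(num_cpus, cores_per_cpu, clock_ghz):
    # One cloud CPU unit = the processing power of a 1 GHz core.
    return num_cpus * cores_per_cpu * clock_ghz

print(cloud_cpu_units(2, 4, 3.0))   # the example above: 24.0 CPU units
print(cloud_cpu_units(1, 4, 2.4))   # a quad-core 2.4 GHz box: 9.6 CPU units

Note that this unit only counts clock speed and core count, so it glosses over per-core efficiency differences between architectures; it simply gives you one number for comparing raw capacity.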
The other dimension of the complexity is to determine the resource needs and do the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask (a rough sizing sketch follows this list):
· How many concurrent users and peak users are there, and what percentage of these users needs to be covered?
· What type of workloads do they typically run – development, test?
· What are the image attributes – memory, CPU, storage, etc.?
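Here is the rough sizing sketch promised above: a back-of-envelope estimate of the managed environment from those answers. All the figures and the 20% headroom are illustrative assumptions of mine, not the output of the IBM infrastructure planner.

import math

def size_managed_environment(peak_users, covered_fraction, users_per_instance,
                             cpu_units_per_image, mem_gb_per_image,
                             storage_gb_per_image, headroom=0.2):
    # How many instances do we need to cover the chosen share of peak users?
    instances = math.ceil(peak_users * covered_fraction / users_per_instance)
    grow = 1 + headroom  # spare capacity so the cloud still feels elastic
    return {
        "instances": instances,
        "cpu_units": round(instances * cpu_units_per_image * grow, 1),
        "memory_gb": round(instances * mem_gb_per_image * grow, 1),
        "storage_gb": round(instances * storage_gb_per_image * grow, 1),
    }

# Example: 500 peak users, cover 80% of them, ~10 users share a dev/test
# instance, each image sized at 2 CPU units, 4 GB memory and 50 GB storage.
print(size_managed_environment(500, 0.8, 10, 2, 4, 50))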
An infrastructure planner for cloud made life easy for me: it had a user-friendly interface to take me through these steps and arrive at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I'll discuss the details of how to plan the managed environment in my next post.
I'd be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
One of the important things to decide when you discuss Cloud Service Strategy and Design is the consideration of a Reference Architecture. This is something that is useful to align to, as it represents the blueprint for your cloud and reduces implementation risk. The Cloud Computing Reference Architecture (RA) is intended to be used as a blueprint/guide for architecting cloud implementations, driven by the functional and non-functional requirements of the respective cloud implementation. The RA defines the basic building blocks – architectural elements and their relationships – which make up the cloud. The RA also defines the basic principles which are fundamental for delivering and managing cloud services.
A reference architecture is more than just a collection of technologies and products. It consists of several architectural models and is much like a city plan. The RA defines how your cloud platform should be constructed so that it can satisfy not only your current demands but also be extensible to support the future needs of a diverse user population. So this blueprint should be responsive to changing business and technology requirements and adaptable to emerging technologies. Existing "legacy" products and technologies as well as new cloud technologies can be mapped on the AOD to show integration points amongst the new cloud technologies and integration points between the cloud technologies and already existing ones. By delivering best practices in a standardized, methodical way, an RA ensures consistency and quality across development and delivery.
The IBM Cloud Computing RA is structured in a modular fashion around the functional capabilities (architectural elements), the user roles (that we discussed in Chapter 12) and their corresponding interactions. The IBM CCRA is based on several cloud engagements and incorporates the good practices and methods implemented across these projects, so for an end user adopting these good practices, the risk and cost of implementing their cloud will be lower. The CCRA is built on the ELEG (Efficiency, Lightweightness, Economies-of-scale, Genericity) principles. One of the principles that I want to highlight here is the Genericity Principle – the capability to define and manage generically along the lifecycle of Cloud Services: be generic across I/P/S/BPaaS and provide an 'exploitation' mechanism to support various cloud services using a shared, common management platform ("Genericity"). As we discussed in the cloud delivery and deployment models (Chapter 3), there can be many models for the deployment and delivery of Cloud Services. A Cloud Service can represent any type of (IT) capability which is provided by the Cloud Service Provider to Cloud Service Consumers – Infrastructure, Platform, Software or Business Process Services. The beauty and significance of the IBM Cloud Computing Reference Architecture is that it can cater to any of these service delivery and deployment models. So whether you are building your private cloud or public cloud, or using cloud to deliver IaaS, PaaS or SaaS, the RA remains the same and handles all of these combinations. We have seen the capabilities that we need (Chapter 6) for implementing a common cloud management platform.
IBM has recently submitted Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) to the Cloud Architecture Project of the Open Group, a document based on "real-world input from many cloud implementations across IBM" meant to provide guidelines for creating a cloud environment. Check out this link, which has an interview with Heather Kreger, one of the authors of the Cloud Computing Reference Architecture, as well as details of the components that make up the RA. On the same topic, there is also an article that I found in the SYS-CON Cloud Computing Journal comparing the Reference Architectures of the big three (IBM, HP and Microsoft), which is an interesting read.
Before we get into the details of the Service Implementation / Transition phase, it is important that we understand the bigger picture. The Word document IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) provides a great description of this bigger picture and goes into the details as required. The architectural principles define the fundamental principles which need to be followed when realizing a cloud across all implementation stages (architecture, design, and implementation). This is a must-read for everyone – development teams implementing the cloud delivery and management capabilities as well as practitioners implementing private clouds for customers.
Chapter 6 - Multiple Entry Points to Deploy and Manage Cloud-Based Services
Cloud Service Management capabilities are needed to enable visibility, control and automation of cloud services. IBM provides the following open-standards-based integrated capabilities – hardware, software and services optimized for cloud – to implement service management for the cloud.
If you are looking for an à la carte software offering/solution for maximum flexibility, you can start with IBM Tivoli Service Automation Manager. This flexible solution supports user-driven service requests and automated resource deployment. The key capabilities are:
· Self-service user interface for service requests, for improved responsiveness and efficiency
· Workflow support to manage the process for approval of usage
· Provisioning – automates provisioning of resources / IT resource deployment for efficient operations and to address fluctuating business requirements
· Integration with existing hardware to leverage available resources and previous investment
IBM Service Delivery Manager (ISDM) is a new offering: a pre-configured management solution optimized for managing virtual environments and cloud deployments. Like Tivoli Service Automation Manager, this again is a "software only" offering. In addition to the IBM Tivoli Service Automation Manager features, ISDM includes these additional capabilities:
· Pre-integrated solution, delivered as virtual images for faster installation and time to value
· Monitoring to provide visibility of the performance of virtual machines
· Usage and accounting tracking
· Server ready for high availability
· Energy management for tracking and optimizing operational costs
IBM CloudBurst, compared to Tivoli Service Automation Manager and ISDM, not only has the software solution optimized for cloud but also ships the integrated hardware. In addition to what is provided by its sibling offerings, IBM CloudBurst provides the following capabilities:
· Self-contained solution (managed from and to environment) to accelerate cloud deployments
· Pre-integrated solution bundled with hardware, software, storage, network and QuickStart services for the fastest time to value
Thus the three offerings are designed for specific purposes, and selecting the right solution is based on the requirement. You can pick from the following list and, depending on what you need, it is easy to select the solution that meets those requirements:
· Automation and Provisioning
· Storage, Network Hardware
Quite often people are interested to know about IBM WebSphere CloudBurst and how it is different from the three offerings discussed above. While IBM CloudBurst and WebSphere CloudBurst are both appliances that accelerate time-to-value and reduce costs, they are designed for two distinct purposes.
IBM CloudBurst is a general-purpose cloud solution. It enables users to virtualize, deploy, manage, and monitor highly heterogeneous workloads in their private cloud. IBM CloudBurst is a pre-packaged cloud with integrated blades, storage, network switches, and software management.
WebSphere CloudBurst is purpose-built to enable users to create, deploy, and manage private clouds created from IBM Hypervisor Edition images and patterns. IBM WebSphere CloudBurst delivers specialized WebSphere knowledge in the form of pre-configured, optimized WebSphere patterns and images. WebSphere CloudBurst is a cloud management device: a 1U appliance that manages a private or on-premise cloud. It requires supporting infrastructure (hypervisors, storage, and networking) and virtual images.
Their integration augments the value of each offering, with IBM CloudBurst enabling end-to-end service request governance for WebSphere CloudBurst provisioning, and users still able to leverage a single portal for cloud service requests for rapid and optimized provisioning of virtualized WebSphere systems.
IT Service Management is the integrated management of the people, processes, technologies and information required to ensure the cost and quality of IT services valued by the customer. IT Service Management (ITSM) is the design, creation, implementation, execution and ongoing management of the IT environment and services that meet the needs of the business and consumers. It includes:
· Management of IT as a business
· Design, implementation, and deployment of IT services
· Delivery of services to IT customers at agreed-to levels of service and price
· Optimization of services through Service Lifecycle Management & Continual Service Improvement
Service Management is at the heart of the Cloud. Research shows that, on average, 81% of cloud payback is driven by labor savings enabled by service management. As discussed in the previous chapter, Cloud Computing gives the IT departments of enterprises an opportunity to move towards a service-driven management model. The same engineering discipline that rationalized factory floors and production can be applied to IT services. Cloud computing provides the technical foundations enabling reengineering of the IT service model, but the goals for service management remain the same as when it is applied to traditional IT. The key objective of the service management system is to provide the visibility, control and automation needed for efficient cloud delivery in both public and private implementations.
· Visibility – The ability to see everything that's going on across the infrastructure. This includes visibility into services and enabling end users to request services through a self-enablement portal.
· Control – The ability to keep the infrastructure in its desired state by enforcing policies. Control enables the fulfillment of user requests based on best practices for request types and conformance to organizational processes.
· Automation – The ability to manage huge and growing infrastructures while controlling cost and quality. Automation of service delivery includes automating user requests and operational tasks to improve efficiency and effectiveness.
ITIL is one of the foundations for service management best practices. A key element of ITIL is the service lifecycle and the need for best-practice processes throughout the life of a service. The ITIL Service Lifecycle modules are Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement (CSI).
Cloud services also have a lifecycle that maps to the ITIL service management lifecycle. In the Cloud context, Service Management controls the efficient implementation of new services, integration with the existing portfolio and lifecycle management of standardized IT services. For instance, Cloud Computing will become a relevant topic in your Service Strategy. You need to see how to leverage the integration of Cloud and traditional IT services during Service Design. For Service Operation you need an automated way to deploy your cloud services – automated provisioning and image management. Continual Service Improvement (CSI) requires the capability to manage, monitor, secure and meter your cloud services.
When discussing IT Service Lifecycle management it is good to discuss the standardization step as well. Standardization helps improve overall operations: the more you can standardize, the more you can reduce operating expenses such as labor and downtime – the fastest-growing portion of IT expenditures. Tivoli Service Automation Manager takes care of standardization and best practices in all the steps of the Service Lifecycle with the capabilities discussed below.
· Service Design and Transition – a Service Template Definition to build service and management plans for services
· Service Offering Creation & Registration – a way to define a Service based on a Template and register it in the Catalog
· Service Offering Subscription & Instantiation – provides a way for users to select the service and specify parameters and SLAs
· The ability to automatically instantiate the Service, with autonomic execution of management plans leveraging automation
· Destruction of the Service and freeing up of resources based on Service Instance Termination requests
These capabilities of providing visibility, control and automation across the business and IT infrastructure result in the following key benefits:
· Integrated processes across the business
· More reliable service delivery
· Improved efficiency and staff productivity
· Reduced operational risk and exposure
We will discuss in detail how you could use IBM CloudBurst, IBM Service Delivery Manager and Tivoli Service Automation Manager for each of these steps in the lifecycle. If you are a developer, the following chapters will help you understand the technologies and skills needed to do the services design, automation and management.
For enterprises, the most attractive factors of cloud are its flexible sourcing options and the choice of deployments. Again, the different deployment and delivery models can co-exist, and it is possible to integrate with traditional IT systems and with other clouds.
Cloud Delivery Models
The key delivery models for cloud are discussed below.
Private Cloud refers to IT capabilities provided "as a service" over an intranet, within the enterprise and behind the firewall. It is privately owned and managed, and access is limited to the client and its partner network. The private cloud drives efficiency, standardization and best practices while retaining greater customization and control within the organization. In a private cloud environment, all resources are local and dedicated, and all cloud management is local.
Figure 1 Private Cloud
Public Cloud refers to IT activities/functions provided "as a service" over the Internet; the cloud is owned and managed by a service provider, and access is by subscription.
The public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible price-per-use basis. Multi-tenancy is a key characteristic of public clouds.
Figure 2 Public Cloud
Hybrid cloud combines characteristics of both public and private clouds, where internal and external service delivery methods are integrated. For example, in the case of an Off-Premise Private Cloud, resources are dedicated but off-premise. The enterprise administrator can manage the service catalog and policies, while the cloud provider operates and manages the cloud infrastructure.
Figure 3 Off-Premise Private Cloud
Community cloud – This is the model where the cloud
infrastructure is shared by several organizations and supports a specific
community that has shared concerns (e.g., mission, security requirements,
policy, and compliance considerations). It may be managed by the organizations
or a third party and may exist on premise or off premise.
Public vs. Private
Overall, private clouds receive higher levels of consideration than public clouds among most enterprises, but various other models are emerging.
Figure 4 Cloud Delivery Models
We need to balance the business benefits of increased speed and lower cost with public cloud offerings against the security, infrastructure ownership and service management considerations when choosing between a public and a private cloud offering for a capability. The governance model, resiliency, level and source of support, architectural and management control, compliance, and customization/specialization are other considerations.
Public and private clouds are preferred for different workloads. Many enterprises still prefer to host their traditional applications on their private cloud. The top private workloads include:
· Data mining and text mining
· Data warehouses or data marts
· Business continuity and disaster recovery
As and when a workload becomes more standard and the SLAs are well established, the same service becomes easy to consume over a public cloud. This is similar to how you can access well-defined banking functions through ATMs: only when you need some special services do you go to your bank these days. Similarly, top public workloads include:
Service help desk
Infrastructure for training and demonstration
WAN capacity, VOIP
Test environment infrastructure
Data Centre network capacity
Cloud Deployment Models
All the computing-related functions that clouds provide are accessed through a service catalog and delivered as integrated services. The different layers of IT-as-a-Service are referred to as the Cloud Deployment Models. More details of these definitions can be found at the NIST website, which is the source for some of the text below.
Figure 5 Cloud Deployment Models
Infrastructure as a Service (IaaS) is the service delivery model where customers use processing (servers), storage, networks and other computing resources/data center functionality. IaaS has the ability to rapidly and elastically provision and control resources. In this model customers can deploy and run software and services without the need to manage or control the underlying resources. The IBM Research Compute Cloud (RC2) is an example of this model. Smart Business Desktop on the IBM Cloud is another example of IaaS; it enables desktop virtualization as a subscription service with no upfront fees or capital expense. Consider reading about IBM CloudBurst if you are building your own IaaS platform.
Platform as a Service (PaaS) is the delivery model where customers use programming languages, tools and platforms to develop and deploy applications on multi-tenant shared infrastructure, with the ability to control the deployed applications and environments. All of this, again, can be done without the need to manage or control the underlying resources. IBM BPM BlueWorks provides tools to build your own business processes. WebSphere CloudBurst is also something to look at if you are building a PaaS platform.
Software as a Service (SaaS) is the popular model where customers use applications (e.g., CRM, ERP, email) from multiple client devices through a Web browser, on multi-tenant and shared infrastructure, without the need to manage or control the underlying resources. An example of this model is IBM LotusLive.
Business Process as a Service (BPaaS) is an emerging model where customers consume business outcomes (e.g., payroll processing, HR) by accessing business services via Web-centric interfaces on multi-tenant and shared infrastructures. Smart Business Expense Reporting on the IBM Cloud is one of the offerings in this category.
In the first two parts of this series we tried to define the term "cloud computing". Having understood what it is, let us now look at how and why cloud computing is gaining importance now.
As the world becomes more interconnected, infrastructure needs to become dynamic to bring together business and IT. The growth of instrumentation, interconnection and intelligence in the world is driving the emergence of IT and business services and the requirement for service management systems. To create such a dynamic infrastructure, customers (businesses) are looking for the following:
· Not having to worry about the full IT capacity they need at peak time
· Paying only for what they actually use, so they do not have to buy servers or capacity for maximum use, i.e. moving to a reduced Capex (capital expense) model and leveraging the economies of Opex (operating expense) for IT
· Automatic or semi-automatic allocation and de-allocation of resources on demand
If you research how a business can address or acquire the above capabilities, cloud computing seems to hold the key answers to these considerations. An effective Cloud Computing deployment is built on a Dynamic Infrastructure and is highly optimized to achieve more with less, leveraging virtualization, standardization and automation to free up budget for new initiatives. Cloud Computing is a new IT consumption and delivery model for businesses that makes the above capabilities a reality:
· A consumption model – a new user experience and business model: standardized service offerings and ease of access
· A computing and delivery model: integrated service management
Progression toward transformation starts with optimizing existing assets/processes and leverages best-in-class technology at each transition. Each step balances improvements in efficiency and effectiveness and can be measured by business returns. An organization can move to cloud systematically, taking one step at a time, or move right to a cloud deployment if that aligns best with their strategic vision for the business.
Readying the infrastructure requires the implementation of a Dynamic Infrastructure: consolidate your servers and storage, implement virtualization technologies to increase utilization, standardize your processes for operational efficiency, automate procedures for more flexible delivery, and enable clients for self-service. Then you identify common workloads and set up shared resources, and finally, to achieve a true cloud-enabled environment, clients must be able to provision the workloads in a self-service manner.
Moving to a cloud consumption and delivery model is a big transformation effort. So before taking this long journey, it is important to understand the typical use cases, the workloads that you can move to cloud and the associated ROI.
Cloud Business Use Cases
One of the earliest groups to take a step towards identifying some of these use cases is the Cloud Computing Use Cases Workgroup on Google Groups. This collaborative effort of cloud consumers and cloud vendors has put out a white paper that discusses some of the basic definitions. The paper further discusses the various use case scenarios from a delivery and deployment model perspective. The white paper is in its fifth iteration, where the group members are now discussing the what and how of "moving to the cloud". The current version of the paper can be found here.
Another significant effort on use cases from a business perspective is the "Strengthening your Business Case for Using Cloud" white paper from the Open Group. I was also one of the key contributors to this effort. This white paper incorporates a unique collection of Cloud business use cases, findings, and conclusions that can help executives and business process owners make the appropriate Cloud investment decisions. By describing real-world, granular business problems and requirements and analyzing the value and business implications of Cloud computing, this paper will equip you with the necessary business insights to justify your path for using Cloud.
A key consideration is that the adoption of cloud computing will be workload driven. The delivery model (public, private or hybrid) selection depends on the workload. Research studies by IBM indicate which types of workloads could be delivered internally with a private cloud and which on a fully shared environment on a public cloud: database- and application-oriented workloads emerge as most appropriate for private clouds, whereas infrastructure workloads emerge as most appropriate for the public cloud.
Most customers want to start with something under their control and behind their firewalls. So the tremendous interest today among businesses is in private clouds – in both large enterprises and the mid-market. There is also great interest in public cloud services – especially among smaller clients for infrastructure services. As businesses become more comfortable moving workloads to public clouds, more domain applications will become available on the cloud. This will also result in a proliferation of hybrid clouds as businesses integrate their private cloud environments with public cloud services.
Benefits of Cloud Computing
The analysis of these use cases, as well as what is discussed in the Open Group white paper, points to the following benefits of using Cloud:
· The ability to dynamically source and consume IT services (infrastructure, platforms, software, and business services) on a demand-use basis – an instantly secure and managed service provisioning process
· The ability to move/abstract the service complexity off-premise to provide more efficient availability, resilience, and security patching
· Increased agility and the ability to adjust to business requirements and market forces on demand
· Improved risk management through improved business resiliency
· A flexible, pay-per-use pricing model, eliminating the cost of excess capacity
· Faster and more flexible service for users, enabling self-service requests and delivering services more rapidly, with fewer errors, and based on requested qualities of service or SLAs
· Reduced time to market and acceleration of innovation projects
· Reduced costs, both capital and operational expenditures
· Freeing up skilled resources to focus on high-value work and innovation
· Significantly improved energy efficiency and reduced idle time
Cloud Deployment and Delivery Models
There are multiple delivery and deployment models that cloud computing supports to deliver the promised capabilities. This choice and flexibility of deployment and delivery models is key to the success of the Cloud Computing platform. The flexible cloud delivery models include private, public and hybrid clouds. Standard cloud service types are also emerging and guiding IT industry development; the different deployment models are:
· Infrastructure as a Service (IaaS)
· Platform as a Service (PaaS)
· Software as a Service (SaaS)
· Business Process as a Service (BPaaS)
The multiple deployment and delivery models can co-exist, and it is possible to integrate them with traditional IT systems and with other clouds. We will discuss them in detail in the coming posts.
Let's start the first module by trying to understand and define the term Cloud Computing in detail. It is comprised of two words – Cloud and Computing. So, simply put, it is computing that you can offer on the cloud. What's the Cloud referred to here? The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the network. The computing could be any goal-oriented activity requiring, or benefiting from, the use of Information Technology, which includes hardware and software systems used for a wide range of purposes: processing, structuring, and managing various kinds of information.
There are several definitions that you can find on the web
for cloud computing.
The National Institute of Standards and Technology (NIST) Information Technology Laboratory has been promoting the effective and secure use of cloud computing technology within government and industry by providing technical guidance and promoting standards.
NIST definition - Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Another definition - Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid.
Internet-based computing was always available, so what's different now? The difference is that cloud computing is a paradigm shift. Cloud computing is a new consumption and delivery model inspired by consumer internet services. Cloud computing is still an evolving paradigm, but in general most of the companies involved with cloud have agreed on certain general characteristics or essentials that qualify any internet-based computing to be referred to as a cloud. They are the following:
On-demand self-service - A consumer can unilaterally
provision computing capabilities, such as server time and network storage, as needed
without requiring human interaction with each service’s provider.
Ubiquitous network access - Capabilities are
available over the network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client platforms (e.g., mobile
phones, laptops, and PDAs).
Location independent resource pooling - The
provider’s computing resources are pooled to serve all consumers using a
multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand. The customer generally
has no control or knowledge over the exact location of the provided resources.
Examples of resources include storage, processing, memory, network bandwidth,
and virtual machines.
Rapid elasticity - Capabilities can be rapidly and
elastically provisioned to quickly scale up and rapidly released to quickly
scale down. To the consumer, the capabilities available for rent often appear
to be infinite and can be purchased in any quantity at any time.
Pay per use - Capabilities are charged using a metered, fee-for-service, or advertising-based billing model to promote optimization of resource use. Examples are measuring the storage, bandwidth, and computing resources consumed and charging for the number of active user accounts per month (a rough billing sketch follows this list). Clouds within an organization accrue cost between business units and may or may not use actual currency.
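As a rough illustration of the pay-per-use idea in that last characteristic, here is a toy monthly bill calculation in Python. The rates and resource names are made-up assumptions, not any provider's actual price list.

RATES = {
    "storage_gb_month": 0.10,  # per GB kept for the month
    "bandwidth_gb": 0.05,      # per GB transferred
    "cpu_unit_hours": 0.02,    # per CPU-unit-hour consumed
    "active_users": 1.50,      # per active user account in the month
}

def monthly_charge(usage):
    # Meter each resource, multiply by its rate, and sum the charges.
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

print(monthly_charge({
    "storage_gb_month": 200,
    "bandwidth_gb": 150,
    "cpu_unit_hours": 720,
    "active_users": 25,
}))  # 20.00 + 7.50 + 14.40 + 37.50 = 79.40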
The intent of this blog is not to duplicate content from other web sites into this article, but to provide a means to navigate through the variety of resources that are available and take a structured approach to understanding the term. Once we have understood this basic definition, let's look at other resources that explore questions such as:
· Is Cloud Computing the same as hosted services?
· Where can I learn more about Cloud Computing?
· What types of applications can run in the Cloud?
Cloud Computing Primer - Part 1 – This white paper, recommended as one of the resources for the Cloud Computing Certification, discusses the definition in detail. Beyond the definition, it discusses the cloud computing context and how it is different from current hosted services. Virtualization plays a key role in meeting some of the characteristics of cloud like elasticity and scalability, workload migration and resiliency, and the article discusses virtualization and its effect on cloud computing. The article further tries to bust some common myths about cloud computing:
· Cloud computing should satisfy all the requirements specified: scalability, on demand, pay per use, resilience, multitenancy, and workload migration
· Cloud computing is useful only if you are outsourcing your IT functions to an external service provider
· Cloud computing requires virtualization
· Cloud computing requires you to expose your data to the outside world
· Networks are essential to cloud computing
To get an overview, the best way is to start with these excellent 3- to 4-minute videos introducing the basics of cloud computing from Common Craft and rPath – Cloud Computing in Plain English and Cloud Computing Plain and Simple. Cloud Computing Explained is another simple video that explains Cloud Computing in a way that everyone can understand! You can find many videos on YouTube if you search for cloud computing, but the one I liked best is this one where a Dad is explaining Cow Computing – I mean Cloud Computing – to his daughter. Check it out.
SlideShare is another good place where I found some very interesting presentations on cloud.
We had our first meeting of the IBM Cloud Certification Study Group yesterday. The objective of the study group is to pass the IBM Certified Solution Advisor - Cloud Computing Architecture V1 certification exam. I wanted to thank all the group members who attended and shared their ideas on how to study for the certification exam. We had group members participate from all over the globe: Sweden, India, North America and Australia. If you couldn't make it, have no worries, we'll arrange another meeting in a couple of weeks' time. Please feel free to join us.
During our meeting we decided on a "divide and conquer" strategy in our approach to studying for the exam. By this I mean taking advantage of each individual's strengths and sharing them with the group. One group member might be well versed in Cloud Security and another might be proficient in SaaS. The idea is to get together and share our knowledge.
During our meeting we covered the following:
Key areas of competency for the Cloud Solution Advisor certification
We've recorded our first session and if you'd like to watch the replay, it can be viewed here. PDF presentation files of the meeting are located here. We've also posted a couple of activities to complete prior to our second meeting. Those are located under the activities section of the group. If you'd like to be notified when we add additional activities let me know and I'll add you to the list.
I'm really looking forward to working with the study group and ultimately becoming an IBM Certified Solution Advisor too.
Electronic signatures are a robust method of verifying the integrity of an electronic document. They are the digital counterpart of putting your signature on the dotted line. With the majority of organizations shifting from paperwork to electronically managed records, the concept of signing these documents electronically is a no-brainer. The need is compounded further in cases where those who sign the document and those who need to verify it work remotely. It doesn't help that the general public's perception of eSignatures is poor compared to their physical counterpart. A recently published paper in the Journal of Experimental Social Psychology looks into the trust issues people have with eSignatures. Another paper explores the indirect side-effects eSignatures have on individual honesty and integrity. Properly implemented eSignatures are in fact very secure and resistant to tampering. This is supported by the fact that online contract signing is going mainstream.
Two regulatory acts provide the baseline for eSignature security compliance standards for various implementations around the world: the ESIGN Act in the US and eIDAS in the European Union. eIDAS identifies three types of eSignatures: basic, advanced (AES), and qualified (QES).
Basic Electronic Signatures
This type of signature involves the signatory putting their signature mark on the document (typed or drawn) and then protecting it with a cryptographic signature. This "witness" cryptographic signature binds the signature marking to the document, so any unauthorized changes to the document are detectable. Ensuring that the person putting in the signature is actually the one who is supposed to sign requires implementing authentication schemes as a precursor to the document signing process. The key used to sign documents under the basic eSignature scheme can either be a centralized one from a service provider or one from the organization itself.
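As a minimal sketch of that "witness" cryptographic signature, the snippet below binds a typed signature mark to a document and verifies the result using the Python cryptography package. It is only an illustration of the idea; real eSignature products layer certificates, timestamps and standard envelopes such as PAdES on top of this.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Signing key; in practice this would come from the provider or the organization.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

document = b"Contract text ..."
signature_mark = b"signed: Jane Doe (typed mark)"
payload = document + signature_mark  # bind the mark to the document content

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(payload, pss, hashes.SHA256())

# Verification raises InvalidSignature if either the document or the mark
# has been altered after signing.
private_key.public_key().verify(signature, payload, pss, hashes.SHA256())
print("signature verified")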
Advanced Electronic Signatures (AES)
"advanced electronic signature" means an electronic signature which meets the following requirements:
(a) it is uniquely linked to the signatory;
(b) it is capable of identifying the signatory;
(c) it is created using means that the signatory can maintain under his sole control; and
(d) it is linked to the data to which it relates in such a manner that any subsequent change of the data is detectable;”
Unlike the basic scheme, where the same key is used to put a cryptographic signature on the document, AES requires that each signatory has their own unique key. The signatory's identity must be established using the certificate provided to them by a trusted authority. The implementation should also be able to identify whether the document data has been tampered with and reject the signature in that case.
For the actual implementation, three standards are used: XAdES, PAdES and CAdES. XAdES (short for "XML Advanced Electronic Signatures") is based on XML Signatures, a general-purpose framework for digital signatures. CAdES (short for CMS Advanced Electronic Signatures) is based on Cryptographic Message Syntax (CMS), another general-purpose framework for digital signatures. PAdES (short for PDF Advanced Electronic Signatures) is one of the most popular standards; it defines a set of restrictions on the PDF document format.
When a digital document is presented to a system, the reviewer validates that the document has not been tampered with, and makes sure that it is signed by a certificate that they trust. That certificate in turn should be trusted by another trusted certificate and so on in the chain, until we reach the trusted root - which is a certificate that is verified to be of legitimate origins and is already stored with the reviewer system. This is very similar to how web browsers validate a website’s certificate.
But what if an organization discovers that its identity was compromised long ago and that documents signed after that point should not be trusted? That would require revoking the certificate, and when the reviewer verifies the document's integrity, they must be able to tell that the certificate they were given is no longer valid. This is done using Certificate Revocation Lists (CRLs), which are published periodically by the certificate-issuing authority, or using OCSP, a protocol for obtaining the revocation status of a certificate in real time. These methods work well but require network connectivity. A solution to this problem is Long-Term Validation (LTV): in an LTV scheme the required validation elements are embedded in the document itself, so the reviewer can verify the signature later on. One benefit of PAdES is that it supports LTV.
Qualified Electronic Signatures (QES)
QES are a more trusted version of AES. They involve a formal registration process in which the signatory verifies their identity before a qualified certificate-issuing authority.
eSignatures are gaining more mainstream acceptance than ever before. The combination of ease of use and security makes this technology very promising. The market is still learning to adopt it. We will see more development in this space in the coming years especially in relation to smart contracts.
This is just about deploying a test OpenStack instance to try it out, run demonstrations, or learn how to deploy cloud instances. In short, I decided to have a test OpenStack instance running on my old laptop and, if possible, a booking page that lets users request access to this OpenStack setup via Horizon or SSH. I ended up creating this page for booking it - http://tryopenstack.cloudaccess.host, and I have a separate URL for accessing OpenStack: http://tryopenstack.dlinkddns.com (if this URL doesn't work, it's because I've shut down my laptop).
Installing the Operating System:
Which operating system do you need? I set up both DevStack and RDO (RPM Distribution of OpenStack), two popular free OpenStack distributions. At the time of writing, Ubuntu 16.04 is great for setting up DevStack, which is geared towards giving OpenStack developers an environment to develop and test in. However, after installing the latest release of DevStack, I decided to try out RDO. RDO is deployed using Packstack, a tool that uses Puppet, an open source configuration management tool, to deploy the various components that make up OpenStack.
As of writing this, DevStack’s latest package sets up OpenStack Pike (which can be confirmed from the nova version – 16.0.0) while the latest stable release of RDO contained OpenStack Ocata.
To set up RDO, CentOS works great, though Fedora may work as well. CentOS is a free Linux distribution based on Red Hat Enterprise Linux. The version I installed was the latest minimal server version pulled from here: https://www.centos.org/download/. I got the "Everything ISO" installer, which includes the option to set up the Minimal version.
My laptop has a wireless port, and for convenience I didn't want to plug in an Ethernet cable. This presents a new problem: the Packstack setup recommends that NetworkManager be disabled on the operating system, which means DHCP is out of the question, and it's not recommended to have DHCP on the port while running OpenStack anyway. I initially solved this by having the router hand out a specific IP address tied to the MAC address of my wireless port. In a way that's static, but it's still DHCP assigning the same IP address to my wireless port every time. So I started experimenting with several static IP configurations on the wireless port and eventually figured out that I had to authenticate to my wireless router manually.
To do this, I generated a hex PSK for my SSID and password combination; any free WPA PSK generator will do. Then I created a wpa_supplicant configuration file called "wpa_supplicant.conf" with the following information.
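Roughly like this, where the SSID and the hex PSK are placeholders for whatever your generator produces:
# wpa_supplicant.conf
network={
    ssid="MyHomeSSID"
    # 64-character hex PSK generated from the SSID/passphrase (no quotes around it)
    psk=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
}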
Then use the following command to authenticate to the wireless router.
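Along these lines, running in the background against the wireless interface (the config file path is wherever you saved the file above; exact flags may vary with your setup):
wpa_supplicant -B -i wlp4s0 -c wpa_supplicant.conf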
As you may have guessed, wlp4s0 is my wireless port. To make this persistent on boot, add the line above to the /etc/rc.d/rc.local file and set the right permissions on that file: chmod +x /etc/rc.d/rc.local.
At this stage, it’s best to reboot your laptop to make sure you can connect to the Internet through the wireless port.
Continue OpenStack setup using PackStack:
It’s not mentioned on the website, but it’s better to disable SELinux or set it to permissive. It’s also recommended to have a fully qualified domain name (FQDN). Ensure you have properly set up the /etc/hosts and /etc/resolv.conf files.
Then perform the following self-explanatory steps;
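These would be roughly the package installation steps from the RDO quickstart of the time; the repository package name below assumes the Ocata release:
sudo yum install -y centos-release-openstack-ocata
sudo yum update -y
sudo yum install -y openstack-packstack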
In my all-in-one laptop setup, contrary to what's specified on the installation website, it's best to generate an answer file, which you can then use to fine-tune what actually gets set up.
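Assuming the Packstack CLI of that release, generating it looks something like this:
packstack --gen-answer-file=answer.txt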
This produces a file (called answer.txt) containing all the parameters for the OpenStack installation. I disabled certain OpenStack components like Swift (the Object Storage service) and telemetry (the metering service), as this is just a test setup on a laptop.
Then I used the following command to run the installation.
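Something like the following, pointing Packstack at the edited answer file:
packstack --answer-file=answer.txt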
My laptop had around 300 GB of disk space, 4 cores and 8 GB of RAM. The setup completed in about 30 minutes, and you should see an "Installation completed successfully" message at the end.
It didn’t work the first time (Error: systemd start for rabbitmq server failed), so I asked a question on the Ask OpenStack forum. It hadn't been answered by the time of this blog post, but I solved the problem by adding a search entry with my domain name to my /etc/resolv.conf file and by adding my internal IP (the wireless port's IP) and my domain name to the /etc/hosts file. I will explain how to generate this domain name later. Note that the way you know the installation didn't succeed is the absence of the "Installation completed successfully" message; you will still be able to access the Horizon dashboard, but several things that depend on the RabbitMQ server will not work.
Once the installation is done, OpenStack sets up several virtual bridges: one for the internal network (your private home network), one for the external network (the public network) and one as a tunnel. Here's a screenshot to better explain this:
There are a couple of ways to handle this. The first, and I'd assume the recommended one, is to set up the "br-ex" interface with the IP address that was on the "wlp4s0" interface, and then make wlp4s0 a port in the OVS bridge. If anyone needs this configuration, I'll post it here, but for the sake of brevity let me skip the details for now. In short, the wireless interface configuration would change to something like this:
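A sketch of the two ifcfg files involved, with placeholder addresses (bridging a wireless interface into Open vSwitch can need extra work in practice):
# /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
# /etc/sysconfig/network-scripts/ifcfg-wlp4s0
DEVICE=wlp4s0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes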
The second way is to use the default network configured by OpenStack on the external bridge – 172.24.4.0/24, as shown in the screenshot above. Since it's a bridge, this network can still route traffic via the wireless port to the Internet by default, and the instances you deploy will have connectivity to the Internet – sort of like a tunnel (I have proof of this below). However, you will not be able to ping the deployed instances from other machines on your local wireless network.
Once the installation is done, you can simply point your browser to the static IP address that you configured for the laptop, and you should see the OpenStack dashboard. One of the nice things I was surprised by was a "material" theme for the OpenStack interface – inspired by Google's Material Design, if you're familiar with Android development.
I had a Cirros instance stuck in the "queued" state, and I was only able to get rid of it from the CLI, using the following command:
glance image-delete <image_ID>
This may have been a result of the failed first install mentioned above.
The rest of the setup is very important. Next, you set up Neutron. I briefly mentioned above which internal and external networks I set up, and I can provide more details if anyone needs them. Then set up a virtual router to bridge the internal and external networks using one gateway address. This is a screenshot of the topology from the "second way" mentioned above.
Next, it's best to set up your key pairs. You have to copy your public key into the Key Pairs section on Horizon. Most cloud instances require key pairs for authentication; the regular username/password method will not work.
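If you prefer the command line, the equivalent would look something like this (the key file and key pair names are placeholders):
ssh-keygen -t rsa -f ~/.ssh/openstack_key
openstack keypair create --public-key ~/.ssh/openstack_key.pub mykey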
Then set up an image – several cloud-init enabled images are available on the Internet, which you can either download and upload to Horizon or import directly via a URL. I decided to stick with the Cirros image, just for initial testing.
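From the CLI, importing a downloaded CirrOS image would look roughly like this (the file and image names are placeholders for whichever release you grab):
openstack image create --disk-format qcow2 --container-format bare --file cirros-0.3.5-x86_64-disk.img --public cirros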
Horizon in the Ocata release has a revamped instance launch screen. It looks much better and is intuitive.
To cut a long story short, I deployed my first instance and it was successful. I also associated a floating IP on the test 172.24.4.0 network. The wireless port acted as the gateway for the internal 192.168.0.0 network.
What's really surprising for a first-time viewer is that the instance could ping google.com in spite of not being registered on my wireless router.
Setup a domain:
This step is quite important and should really be at the top of this page, as it was one of the first things I did. My cheap wireless router has a feature for creating DDNS addresses associated with the public IP (which may change on router reboot) provided by my ISP. So I generated a URL for my public IP address and opened up the HTTP port (80) on the router. If this URL doesn't work, it's because I've shut down my laptop.
The idea is that if you send me a request via this URL (http://tryopenstack.cloudaccess.host/), I can open my laptop, create a project and a user for you, and let you access the resources. I also set up a booking page to charge for resources via PayPal.
Here's a diagram I put together quickly to explain the whole thing to a layman:
I wrote the whole thing in about 30 minutes, and I apologize for grammatical errors.
Things to try in the future:
Convert site to HTTPS, SSL
Try increasing the SWAP space and figure out if it’s possible to use more memory for deployed instances (OpenStack itself takes up about 6 GB of RAM)
Go back to the “first way” of Neutron setup - of using the wireless network as internal network for deployed instances, but this would require real floating IP addresses on the Internet for these instances
Try a multi-node setup
Implement a similar setup on test instances on a public/private cloud
The use of business intelligence and analytics to make decisions is on the rise in corporate America. According to a Gartner report, the BI and analytics market is expected to grow to $20 billion by 2019. We are already at a stage where over 50% of the analysts and users in organizations have access to self-service tools for business intelligence. This is not surprising given recent reports suggesting that such tools make businesses five times more likely to make faster decisions.
But it's not just faster decision making that is driving the adoption of business intelligence tools. BI and analytics rely heavily on data to arrive at conclusions, and decisions based on data are likely to be far more predictable and trustworthy than those based on gut feelings or consumer surveys.
Deploying business intelligence at work is, however, much more than simply installing vendor software. BI tools are only as successful as you make them. The most successful businesses are those that see BI as a component of a larger process and culture change within the company. This change requires managers to establish processes for gathering more data, processing it, and actively pursuing insights from it to arrive at decisions.
Identify your objectives
The first step towards a successful deployment of BI is understanding your business objectives. A cost reduction project, for instance, would require a system where capital and cash outflow data from all your various warehouses and distribution centers are available at a granular level. On the other hand, if your objective is revenue maximization, then your system will not only need data pertaining to the various SKUs in the market along with their sales and distribution numbers, but also similar data of your competitors. In other words, knowing your business objectives will tell you the kind of data you will need. This will help you establish a system that gathers this data. Without a working system, deploying a BI platform is meaningless.
Pick the right tool
The success of a BI deployment depends to a great extent on the software tools that you use. However, the best BI tool on the market may not be the right one for your business. Your evaluation should include parameters like the cost of the tool, the size of business it is targeted at, and the nature of deployment. According to this list of BI software tools, there are over 339 products available in the market today. While tools like Slemma BI and Grow BI are targeted at the price-sensitive customer, others like Rapid Insight focus on businesses that prefer local Windows installation. Pick a tool that meets not only all your feature needs, but also works within your budget and deployment requirements.
Effect a data-driven culture
Business intelligence is one of the best examples for the popular computing phrase, ‘Garbage in, Garbage out’. In other words, incorrect or insufficient input would bring about incorrect or insufficient output. The only way to break this cycle would be to drive a cultural change within the organization that focuses on gathering data at every level and bringing them together for the decision making process. This is not an easy thing to achieve, especially if you are an enterprise business with hundreds or thousands of employees. The lead, however, needs to come from the top and this push for data-driven decision making is extremely crucial to the success of a BI deployment.
Do you make use of business intelligence software at work? Share your experience with us in the comments.
A software review is a process in which team members get together and discuss the different aspects that need to be incorporated into an upcoming product or software release. Different sources aid the review, such as test specifications, test plans and design documents. It basically serves to bridge the gap between the user requirements and what is actually built. These reviews are performed at different levels of formality, including formal inspections, pair programming and informal walkthroughs.
A software review also catches architectural errors, bugs, logic flaws and typos. Senior and junior engineers work out the solutions together, which helps them get to know the product in more detail.
Why is it important
Software reviews take place many times while a product is being developed. They help all the members of the team stay on the same page and maintain consistency, which is crucial since team members work in different styles and the company has many projects competing for attention. Having the software checked by various associates also brings in different perspectives: problems and bugs can be corrected easily when these reviews are held, and a better, simpler solution can often be found when many minds are at work.
There are different types of software reviews which are discussed below.
Software peer review is done at the developer level. The author takes the help of colleagues and fellow developers to test and examine the technical content of the software. Linus's Law states that "given enough eyeballs, all bugs are shallow", which loosely translates to: the more reviewers you have, the easier the problems are to tackle. This type of approach is usually seen in open source reviews.
The software management review is done to examine the software's usage of resources and the overall project status. A management review can be either informal or formal. A formal review follows specific steps: first, the structure of the review is prepared and planned; next, the results of previous software peer reviews are analyzed; then individual preparation is checked and the group or team examination takes place; finally, a follow-up happens and the formal management review is wrapped up.
Other aspects of software review
Performing software reviews at the development stage is efficient: the cost of fixing a problem at this level is much lower than fixing it later during software testing or after release. There is also always a downside to recalling products after they have been dispatched and launched in the market. This is considered the defect detection process.
Software reviews also help train developers to produce code and documents with fewer errors. The method becomes more efficient over the years as team members learn to keep errors at bay. This is the next level of review, known as the defect prevention process, and it helps the company in the long run.
When a product is developed and reviewed early, it tends to have a greater impact on subsequent projects, and it helps the company release the product to market early and make greater profits.
Software review plays a pivotal role in the development of any product a company ships, and it has many advantages, especially at the user end. One of its main aspects is testing. The base cost of developing software and the hardware it requires is relatively low, which is why, for example, one can find both cheap and expensive smartphones: what goes into the higher brands' flagship models is rigorous testing and reviewing, and a product is certified and widely accepted by the crowds because of its reviews. A smartphone generally has an average shelf life of 2 to 3 years, advanced technology is incorporated at every stage in all fields of electronics, and software reviews have proven very successful in that setting.
Big data has become the blood that pulses through our complex, modern society. But more than simple data acquisition, the challenge is leveraging so much information. From Amazon to the Food and Drug Administration, everyone is looking for ways to use data to boost productivity and improve customer support. And it’s only just begun. According to a Gartner report, more than 75% of companies are now investing or planning to invest in big data imminently.
As with most advancements in technology, necessity has been the mother of invention. The influx of data brought about by the big data revolution has created a real and serious need for technologies capable of analyzing and organizing all this information. More important is translating this data into timely, actionable feedback and eventual ROI.
A number of technological solutions to the data overload problem have been explored, but after some trial and error, the graphics processing unit (GPU) has emerged as the front-runner. You've probably already heard of the GPU, which originally enabled your computer to deliver a faster, more enjoyable experience for gaming and watching videos. It didn't take long for innovators to realize the same technology could also process more intense computations faster, and more cost-efficiently, than the CPU methods currently in use. In other words, using GPUs for big data analytics means any business can make better-informed decisions in real time.
Realizing this potential, it was just a matter of writing a new programming language to allow direct interaction with the GPU. Now massive amounts of data could be systematically organized into usable chunks both actionable and precise. Some systems even offer users marketing suggestions and best practices advice.
Change doesn't happen overnight. Nevertheless, the GPU revolution is beginning to spread. In 2010, China's Tianjin-based Supercomputing Center launched the Tianhe-1A, equipped with 14,336 Xeon X5670 processors and 7,168 Nvidia Tesla M2050 general-purpose GPUs. It was, for a time, the fastest supercomputer on the planet. Since then, GPU-based supercomputers have gone into operation in the U.S., Russia, Switzerland, Italy, and Australia.
An exciting development has been the advent of affordable, software-optimized systems for the business world. A number of developers, Tel Aviv-based SQream, for example, use software to maximize the power efficiency of databases. The result is faster, more comprehensive analytics without the need for expensive supercomputers. From any perspective, this simply makes more sense for the average business than investing millions in hardware.
GPUs and Deep Learning
Where GPU technology really shines is deep learning. Simply put, deep learning is a type of machine learning that works in a manner similar to the human brain. Deep learning enables computers and other machines to learn and adapt responses according to perceived (or input) behavioral patterns. While this was possible with standard CPU systems, GPUs allow machines to adapt faster and more precisely to new situations.
Big data is opening doors in ways no one would ever have believed just a few decades ago. GPU-based technologies are the keys to unlocking them. It’s an exciting time to be involved in the world of tech. One can only imagine what new developments will come. But without a doubt, big data will continue to change how business is done into the future.
Separate Reconciliation and Renderer
Fiber architecture splits the process of reconciliation of the DOM tree from the actual rendering step. This enables using different types of renderers. The reconciliation step is necessary to compute the difference between the DOM nodes at one point and at another point after a state change in the application has happened. Keeping the render phase decoupled makes it possible to use React in places like VR (the React VR project), hardware (the React Hardware project), the web (the core ReactJS project), and native components on platforms like iOS and Android (the React Native project), all while sharing a common code base. The ReactJS codebase is very convoluted at the moment, which makes it difficult for beginners to contribute to it; this rewrite aims to solve that issue as well.
Breaking Reconciliation into Two Phases
React Fiber breaks reconciliation into two parts: evaluation and commit. The evaluation of changes to be made to the DOM tree is now asynchronous. React uses a virtual DOM to find the difference between the current state and the next state efficiently and only propagates the final result to the real DOM. Earlier, React moved down the component tree and applied changes along the way. Now it waits for the browser to give it control using the requestIdleCallback API where it is supported, with a polyfill for older browsers. This helps the browser keep the UI responsive even when React is trying to make an update that requires heavy computation. The evaluation phase is interruptible, which makes the application more fluid. The second phase is the commit phase, in which the evaluated updates are written to the DOM tree. This phase is non-interruptible, as conflicting updates might introduce layout thrashing.
Returning Array Of Components
React Fiber supports returning an array of components from the render function instead of a single top-level component. Earlier, if a render function wanted to return multiple components, it had to wrap them in a top-level dummy component, such as a span on the web platform. This worked just fine but caused major problems when the application used flexbox for styling. React Fiber finally solves this much-hated problem and more: render functions can even return nested arrays of components, or just text.
Web developers are in a constant fight to reduce the time to first paint for the user. It is common practice to use server-side rendering to render React applications before pushing the content to the user. This becomes problematic when the render takes a considerable amount of time, probably because of some heavy computation: until that processing is complete, the server cannot send the rendered application. React Fiber solves this by including a streaming renderer, so content can be streamed as soon as it is processed. This makes it possible to send static content, like analytics embeds from providers such as Mixpanel and Adobe Analytics, beforehand by keeping them in top-level components. Data from these can be combined later using integrations from an aggregator like Segment. This feature has been requested for a long time and it will most likely make the cut for the initial release of Fiber.
Different Priorities For Different Updates
React Fiber introduces the concept of different priorities for different kinds of updates. Some updates might be high priority, like animation updates, but they might be happening in deeply nested components. In that case, if the parent components require heavy processing, that would block the animation update in the child components in the current version of React. React Fiber solves this by assigning a higher priority to animation updates, so different kinds of updates get scheduled in a different order. There are other concerns here which the React team needs to solve: jumping the queue should be allowed for some updates, but starving other updates must not happen. Another issue is that component lifecycle events might go out of sync because of this reordering. Facebook's React team is currently looking into a graceful lifecycle event path. This architecture also makes the componentWillMount event questionable, and there have been requests to remove it as well.
Parallelized Updates to The Tree
React Fiber makes it possible to make parallelized updates to the DOM. In the current version updates are applied recursively to the components in the order that they are listed.
React Fiber is definitely going to make a lot of applications work better out of the box. The official semantic version it will hold is 16.0.0. If your application works fine with version 15.5.4, it will continue to work seamlessly with React Fiber once you upgrade.
Google is one of the most prominent firms in the information technology market. This is not only because of its software, but because it is one of the most important companies producing search results on the internet. Many companies depend on the Google search engine to make sure that their brand is displayed at the top of the search results page. A large part of this is done by the company's artificial intelligence, developed under the name Google Rank Brain. To test the algorithm, Google's search engineers were asked to guess the top results Google would display for a particular query, and Google Rank Brain was tested on the same task. The engineers had a 70 percent success rate while Google Rank Brain had an 80 percent success rate. This has made it a primary focus for companies looking to improve their online presence. Let us look at a detailed study carried out by software engineers in this article.
Google Rank Brain is an algorithm built on hundreds of mathematical calculations called vector measurements. It is written in such a way that the computer can understand the code and the search engine can be optimized. The main job of Google Rank Brain is to improve the results for Google queries. For example, if the user types a certain word in the query box, Google Rank Brain searches its database for phrases related to that word and helps find accurate results. This makes the task more effective and efficient.
Another important feature of Google Rank Brain is that it can learn new things based on the way users search. It is thereby very efficient at learning new phrases and recognizing search patterns.
Importance of Google Rank Brain
We are in a phase where every little thing is shifting to online platforms. It is very important to keep pace with technology to make sure that you do not miss out on potential customers and associates. You can only make your business successful if you have a good online and offline presence in the market. For an impressive online presence, you should know the basics, such as search engine optimization. Google Rank Brain is involved in handling as much as 15% of search queries on the internet.
There are different factors responsible for query results, and one of the most important is Google Rank Brain. The algorithm is written in such a way that the top results are displayed according to how the phrase matches the search engine's database, which gives accurate search results.
Google Rank Brain and SEO
Search Engine Optimization, or SEO, is one of the most important factors that any company should keep in mind if it wants an impressive and influential online presence. For this, the content writers of the website should be aware of the keywords being used by customers while searching for a particular query: the more accurate the keyword, the more attention the website will receive. Google Rank Brain is based entirely on phrases and keywords, so content writers should use a precise set of keywords that reflect the services the company is offering.
When the correct set of keywords is used, the Google Rank Brain will link the keywords with the keywords in the database and link it to the websites that are ranked based on the algorithms.
Future of Google Rank Brain
Google Rank Brain was first rolled out in 2015. Back then, the software engineers had built the algorithms in such a way that they remained constant until the script was updated and rolled out again on the internet. Gradually, changes were made to the script to make it more user-friendly and interactive. As users enter queries, Google Rank Brain learns the patterns and feeds them back into the script. This makes the script more interactive, and you get better results the next time you search on the internet.
Graphic design is not a cakewalk; it usually requires the user to enroll in design school and cultivate high-level skills and experience in the field. This often takes a toll on non-professionals and students who struggle with designing. Fortunately, all those out there who are struggling to put together creative visual presentations and lack the professional skills of an experienced designer need fret no more. The creative team at Adobe has built an easy-to-use design application, Adobe Spark, which aims to assist non-professionals and students in creating beautiful visual presentations, web stories, e-invites, animated videos and more. The application is not only free of cost but also extremely easy to use. One striking aspect of its usage is its popularity among children: the colorful and bright interface evokes interest and a feeling of enjoyment among not just kids, but adults as well. Let's look at some of the most creative ways in which Adobe Spark can be put to use.
Thinking about creating a website to share information and promote a newly created handicrafts, bakery or confectionery business, or even accounts of adrenaline-filled solo trips? One can create beautiful and creative websites, without any prerequisite knowledge of HTML, using the Page option in Adobe Spark. The user first creates an account, which is absolutely free, then picks the most suitable theme from the Theme Gallery and chooses from an exquisite range of fonts to beautify it further. One can upload one's own favorite photographs or choose from the photos already provided in Spark, then customize the page with more creative elements and preview the work. The website is assigned a new URL that can subsequently be shared with friends and family across social media. Users can even put together travel blogs, web stories and newsletters using the Page feature in the application.
School projects are no longer monotonous!
Projects are sometimes too boring and tedious, and so children end up procrastinating and avoiding it till the eleventh hour. With Adobe Spark, school projects can be made more fun-filled, engaging and interesting. The Video feature allows its user to create tutorials, presentations and even animated videos within a short span of time. Students can choose from a variety of colorful templates, gorgeous typography and eye-catching themes in order to make their projects look more appealing and creative. They can even record an audio and incorporate it into the animated videos to instill life into the stories and characters created by them. Teachers and educators can easily play with this innovative application and redefine the traditional assignments and projects, such that the students enjoy and look forward to working with them.
E-invites and save the date postcards
In order to share announcements related to birthdays, engagements, weddings, graduation and even christening ceremony, one eagerly looks forward to sharing them in the most unique and beautiful ways, so that the invitees take the time to read and indulge in the creativity of the invites. In an era of technology, facilities like E-invites save time and are considered much more economical as compared to the traditional physical invites. The Spark Post feature allows the users to put together elegant and artistic E-invites and save the date postcards which would catch the readers’ eyes. This feature is a great savior of time since one can share the invitations via email or even download them for printing. This feature can help design advertisements, pamphlets, flyers, and even quotes in the most innovative and aesthetic ways that are definitely going to entice the readers.
After creating design applications like Photoshop, Illustrator, and Lightroom, Adobe has finally put together an application like Spark which doesn’t need the users to possess any pre-required design skills or knowledge about graphic designing. Due to the versatility of the features in the app, the eye-catching and colourful user interface which makes it popular among both children and adults, and the numerous creativity elements incorporated, Adobe Spark is a one-stop destination for all the non-professionals out there who are looking forward to stepping into the world of creative and efficient designing, be it at home or workplace or even schools.
Blockchain is gaining popularity as an emerging trend in technology that ensures the security and management of a database. It proves to be a convenient source for record and data management, identity management, and transaction processing and documentation, where the system is resistant to any form of alteration or modification by a third party. The database manages a growing collection of records known as blocks, where each block is linked to a previous block. Crafted to secure data from being misused, Blockchain technology makes it possible to maintain records of transactions between two parties in an efficient and verifiable manner.
Designed specifically to protect data from being tampered with, Blockchain utilizes a distributed timestamping server where databases are maintained autonomously and the open distributed ledger is programmed to process transactions automatically. Serving as a public ledger for all potential transactions, the concept was first introduced by Satoshi Nakamoto almost a decade ago, when it was applied to the digital currency Bitcoin, which facilitated transactions on a public and decentralized ledger. The Blockchain technology developed to meet the requirements of Bitcoin was the first in the digital field to provide a solution to double spending without involving an authority or a centralized server.
New developments in Blockchain technology provide a platform to secure transaction processing through a decentralized and distributed public ledger where online records are effectively managed and maintained without the interference of a third party. This enables auditing and data management to be carried out in a cost-effective manner. Blockchain technology has created rising demand for itself in a market where people prioritize data security. Blockchain negates the possibility of the replication of a database or digital asset; in providing a solution to the problem of double spending, the technology ensures that every segment of data is only transferred once. Proving to be an effective method of managing transactions, Blockchain is a safer option compared to traditional methods of data management. Consisting of transactions and blocks, Blockchain technology offers a systematic approach to the decentralized management of online data. The technology can secure and manage large-scale transactions through the distribution and decentralization of the digital ledger. It is, therefore, a trusted approach to database management, as it aims to maintain the quality and safety of online interactions.
Defining Digital Trust
Utilizing the concept of cryptography, the technology aims to ensure that data cannot be altered or modified once recorded. As an ethical form of data management, Blockchain has enabled transparency in the fight against the unethical trade of jewelry and diamonds, helping the industry comply with government regulations. Having started off as a transaction and data processing system in banks, the evolving technology has come a long way to making its mark in the global market. Retailers are moving to utilize this technology to facilitate transparent online transactions and clarity in inventory management. Business organizations believe that the technology has the potential and scope to enable trustworthy financial transactions on the digital front. Humaniq, for example, is a financial services app powered by the Ethereum blockchain. The startup aims to provide formal banking access to people who currently do not enjoy the privilege of systematized banking and transaction services. Integrating with the global economy, Humaniq aims to introduce its application, built on blockchain and biometric technologies, to provide over 2 billion people with financial inclusion.
As a public ledger for Bitcoin transactions, Blockchain technology ensures a safe and transparent environment for online transactions. The industry has witnessed a significant rise in the demand for Blockchain technology through the integration of cost-effective transaction management that offers a trustworthy platform to the public. The global trend has shifted from a centralized authority or controlling third party towards a decentralized and autonomous transaction platform that ensures security and preservation of the original data. This innovation has enabled the formation of an independent body, through mass collaboration, that enforces the authentication and trustworthiness of online information. Carving itself a niche in the market for financial transactions, the technology continues to see widespread global demand.
New developments in Blockchain technology effectively ensure immutability and transparency, following ethical and trustworthy means to facilitate online transactions. The innovation aims to have a lasting impact on the trustworthy trade of digital currency and ensures transparency by recording transactions and information publicly through a decentralized digital ledger. While yet to gain complete prominence in the market, the rising technology has shown the potential for exponential growth through the maintenance of an incorruptible public interface.
Project managers are spoilt for choice when it comes to the various alternatives available in the market today. Picking one among these project management tools is essentially a question of what features you need for your business and which of these tools meets all of your needs at the most affordable price. Given that most project management tools price their product based on the number of users in the system, the outgoing expense can quickly escalate.
Project Management Tool Comparison By Price
In this showdown, we are going to compare four project management tools that are each priced in their own unique way. Hubbion is completely free to use. Trello offers most essential features free of cost while restricting some important ones. Wrike offers very few features for free, with most important project management features restricted to paid users. Basecamp has no free version, but has a fixed fee of $99/month.
Now let us compare the features offered by each of these tools. Each of these tools has its own take on how many users can be permitted to collaborate, how many projects they may collaborate on, the size of the files that can be uploaded and the storage limit for documents.
Users: Hubbion is open to an unlimited number of users, and so is Trello. Wrike, on the other hand, is free for up to five users; for larger teams, you will need to pick a paid plan that starts at $9.80/user/month. Basecamp, as mentioned already, requires you to pay a fixed $99/month fee for an unlimited number of users. Another difference to note here is team structure. Both Hubbion and Trello let users collaborate with multiple teams at the same time; that is, if you are a third-party contractor working with multiple clients, you can do so from a single dashboard on Hubbion. On Wrike and Basecamp, however, users are managed by a team administrator, so you only get added to a single team where you can collaborate.
Projects: The next feature that commonly differentiates the various alternatives is the number of projects you can create. All these tools offer an unlimited number of projects. However, if you are a free user, then you can only do so on Hubbion, Trello or Wrike (for up to 5 users). Paid users have the option to create unlimited projects on all platforms, including Basecamp.
File Storage Limits: Depending on what business you are in, the free version may or may not be a good fit for your needs. Businesses that deal with large file sizes would find Hubbion the best among all these tools. It offers unlimited file size uploads along with no visible cap on the total storage size. Trello has a 10 MB cap on the file size while Wrike limits the overall file storage to 2 GB. Paid users of Trello can attach files of up to 250 MB while those on Wrike can store as much as 5 GB (going up to 100 GB) depending on the plan you choose.
Cloud storage services have seen a massive increase in the number of users in the last few years - both in the personal storage space and the business use case. This increase has come with a lot of scaling challenges for the service providers. One such challenge is to implement a good resource sharing management system. The users may want to share their content with others; this is fine in ordinary conditions but when a user shares the content with a large audience, your service takes a hit due to hotlinking.
There are many other problems in this space. The content that has been shared might be of illegitimate origin or might contain offensive material, so the service becomes a vector for illegal activity; this is particularly troublesome in the case of pirated content. Another problem is that a widely shared resource might cause an unintentional denial-of-service attack on the service. The service would not be able to collect any meaningful analytics either. What if the user wants to consume the content themselves but cannot log in every time to reach it? This is very common when a download manager is used for fetching the resource, or when the user wants to resume the download at a later time. What if the user wants to share the content with a limited set of people and would like the resource URL to expire after a certain time period? Solutions to all these problems and more find their use in cloud storage services like Amazon S3, Google Cloud and virtual data rooms.
URL signing is a scalable solution to the problems mentioned above. The idea is very simple - each resource URL is generated in a way that it is unique and is identifiably linked to the creator of that URL. This is done by including an identification object in JSON format in the URL as a parameter, which is encrypted as per the JOSE standard. This object can contain a number of claims which identify the issuer of the URL, the expiration time, the start time, the scope of sharing (which identifies who all have access to the content other than the creator), and the sharing policy (public vs. private vs. login required). When a request is received, the service provider verifies it using Public-key cryptography and denies all invalid requests.
URL signing comes with some caveats too, the biggest one being difficulty in implementing caching strategies. Let’s say that a user looks at a picture on a website. This picture is served to the user via a signed URL. If the appropriate headers are set on the resource, the browser will cache it and link it to the URL it came from. However with signed URLs, unless you maintain a state on the server side, or design a stateless algorithm that issues the same signed URL within a specified duration, the browser will not be able to leverage the cached image since the URL signature would change the next time the user looks at the picture. This is also problematic when the resource has a large size, so it takes considerable time to download it. In that case the user may want to download a portion of the resource later on but resume would not work if the signed URL changes. The increased degree of difficulty in implementation is well rewarded with the benefits that come with signed URLs though.
Both Google Cloud and Amazon S3 provide first class access to URL signing in their platforms, which differ a little in their implementation details. More details can be found in their respective documentations here and here.
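For a feel of what this looks like in practice, both platforms can mint time-limited signed URLs from their command-line tools; the bucket and object names below are placeholders:
# Amazon S3: URL valid for one hour
aws s3 presign s3://example-bucket/reports/q3.pdf --expires-in 3600
# Google Cloud Storage: URL valid for one hour, signed with a service account key
gsutil signurl -d 1h service-account.json gs://example-bucket/reports/q3.pdf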
With any new technology, there’s “fake news”, and SD-WANs are no exception. It’s true, SD-WANs probably won’t reduce your WAN costs by 90 percent or make WANs so simple a 12-year old can deploy them. But there are plenty of reasons to be genuinely excited about the technology -- and we’re not just talking about cost savings. Often, these “other” reasons get lumped into the catechisms of greater “agility” and “ease of use,” but here’s what all of that really means.
Align the Network to Business Requirements
When organizations purchase computers for employees, they try to maximize their investment by aligning device cost and configuration to user function. Developers receive machines with fast processors, plenty of memory, and multiple screens; salespeople receive laptops; and designers get great graphics adapters (and Apples, of course). SD-WANs let you do the same with the network, matching the connection type to each location's requirements:
Mission critical locations, such as datacenters or regional hubs. These can be connected by active-active, dual-homed fiber connections managed and monitored 24x7 by an external provider -- and with a price tag that approaches MPLS.
A single, xDSL connection. This can connect small offices or less critical locations for significant savings as compared against MPLS.
Short-term connections. These can be set up with 4G/LTE and, depending on the service, mobile users can be connected with VPN clients.
All are governed by the same set of routing and security policies used on the backbone. By adapting the configuration to location requirements, businesses are able to improve their return on investment (ROI) from SD-WANs.
Easy and Rapid Configuration
For years, WAN engineering has meant learning CLIs and scripts, mastering protocols like BGP, OSPF, PBR, and more. It was an arcane art, and CCIEs were the master craftsmen of the trade. But for many companies, managing their networks in this way is too expensive and not very scalable. Some companies lack the internal engineering expertise, others have the expertise, but far too many elements in their networks.
SD-WANs may not make WANs simple, but they do allow your networking engineers to be more productive by making WANs much easier to deploy and manage. The “secret sauce” is extensive use of policies.
Policy configuration helps eliminate “snowflake” deployments, where some branch offices are configured slightly differently than other offices. Policies allow for zero-touch provisioning and deployment. Policies also guide application behavior, making it easier to deliver new services across the WAN without adversely impacting the network. With an SD-WAN, you really can drop-ship an appliance to Ittoqqortoormiit, Greenland and have just about anyone install the device.
Limit Spread of Malware
SD-WANs position an organization to stop attacks from across the WAN. The MPLS networks that drive most enterprises were deployed at a time when threats predominantly came from outside the company. “Security” meant protecting the company’s central Internet access point and deploying endpoint security on clients. Once inside the enterprise, though, many WANs are flat-networks with all sites being able to access one another. Malware can move laterally across the enterprise easily, as happened in the Target breach that exposed 40 million customer debit and credit card accounts.
SD-WANs start to address some of these challenges by segmenting the WAN at layer three (actually, layer 3.5, but let’s not get picky) with multipoint IPsec tunnels. The SD-WAN nodes in each location map VLANs or IP address ranges to the IPsec tunnels (the “overlays”) based on customer-defined policies. Users are limited to seeing and accessing the resources associated with that overlay. As such, rather than being able to attack the complete network, malicious users can only attack the resources accessible from their overlays. The same is true with malware. Lateral movement is limited to other endpoints in the overlay -- not the entire company.
Don’t Sweat the Backhoe
As much as MPLS service providers manage their backbones, none of that will protect you from the errant backhoe operator, the squirrels, or any one of a dozen other "mishaps" that break local loops. Redundant connections are what's needed.
With MPLS, that would normally mean connecting a location with an active MPLS line and a passive Internet connection that’s only used for an outage. Running active-active is possible, but can introduce routing loops or make route configuration more complicated. Failover between lines with MPLS is based on DNS or route convergence, which takes too long to sustain a session. Any voice calls, for example, in process at the moment of a line outage will be disrupted as the sessions switch onto a secondary line.
With SD-WANs use of tunneling, running active-active is not an issue. The SD-WAN node will load balance the connections and maximize their use of available bandwidth. Determination to use one path or another is driven by the same user-configured traffic policies that drive the SD-WAN. Should there be a failure, some SD-WANs can failover to secondary connections (and back) fast enough to preserve the session. The customer’s application policies continue to determine access to the secondary line with the additional demand.
Conventional enterprise wide area networks are a hodgepodge of routers, load balancers, firewalls, next generation firewalls (NGFW), anti-virus and more. SD-WANs change all of that with a single consistent policy-based network, making it far easier to configure, deploy, and adapt the WAN. As SD-WANs adapt to evolve and include security functions as well, the agility and usability of SD-WANs will only grow.
Implementation details about the Microservice can be studied in the source code by loading the project into your preferred Java IDE such as Eclipse.
Before the Microservice can be run inside Docker, the Docker technology must be installed on your local machine. You can follow the step-by-step Docker installation procedure at: Docker Installation
Once Docker is installed correctly, you can test your installation using the following command:
docker run hello-world
Create a Microservice Docker Image
In the Docker ecosystem, there are two main concepts to understand.
Docker container: A Docker container is a lightweight instance of a Linux based OS running on top of your host Operating System
Docker image: A Docker image represents your application software plus the entire environment it needs, which runs inside a container
For the above microservice, the container loads the microservice image, and as part of this image it not only loads the Application Code for the microservice, but also the Java 8 environment it needs to run the microservice.
But, before you can load the microservice into Docker, you need to create a Docker image for that software. The steps to create the image are as follows:
Create a directory next to your microservice project
Copy microservice artifacts to the build directory
Create a Dockerfile in the build directory; the instruction that starts the service is: CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml
From the Docker session, go to the hello-microservice-build directory and issue the command
docker build -t hello-microservice-local .
The Docker build process uses a file named Dockerfile to get its instructions about what to do when building an image. In this particular microservice, the Dockerfile instructs the Docker system to download an image called 'java:8'. This is the core infrastructure needed to run the microservice. Next it adds the microservice jar and configuration to the image. And later, it exposes the ports 9000 and 9001 to service the requests.
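Put together, the Dockerfile would look roughly like this; the jar and yaml file names are taken from the CMD line above, and this is a sketch rather than the exact file shipped with the project:
FROM java:8
ADD hello-microservice-1.0-SNAPSHOT.jar /
ADD hello-microservice.yaml /
EXPOSE 9000 9001
CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml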
docker build -t hello-microservice-local . is the command that processes the Dockerfile and produces the hello-microservice-local image.
Note: make sure this command is issued from the Docker session and not just any command line session.
Once this Java Microservice Docker image is created, it must be run inside a Docker container using the following command:
docker run -p 9000:9000 --name hello-microservice-local -t hello-microservice-local
With the recent exploration of cloud computing technologies, organizations are using cloud service models like infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) along with cloud deployment models (public, private and hybrid) to deploy their applications.
There is a concept in the cloud world that is based on application characteristics: the concept of cloud-enabled and cloud-centric applications. In this blog post, Dan Boulia provides a concise explanation about the concept.
You could say that a cloud-enabled application is one that was moved to the cloud but was originally developed for deployment in a traditional data center; some of its characteristics had to be changed or customized for the cloud. On the other hand, a cloud-centric application (also known as cloud-native or cloud-ready) is an application designed from the start around the cloud principles of multi-tenancy, elastic scaling, and easy integration and administration.
When developing an application that will be deployed in the cloud, you must keep the cloud principles in mind. They should be taken into account as part of the application. So we come to the first point: Is it better to work within an existing application or to completely redesign it? There is no exact answer because it depends. You have to evaluate the level of effort (labor, time and cost) to transform the application into cloud-enabled versus the effort to completely redesign it to a cloud-centric application.
The second point is: Will my cloud-enabled application work better than a new cloud-centric application? Here I would say no. It’s rare to find an existing traditional application that was developed with any of the cloud principles in mind. It may be possible to construct the same feel (for the user) as a cloud-centric application, but it will not function the same way internally.
Changing an existing application could be easier since you already have the skills and tools in the organization and you won’t need to learn any new technology. However, while it may be easier to change the application, in the long term it will be harder to maintain. New technologies (social media, mobile, sensors) continue to appear and it is becoming more important to integrate them. Doing this will require additional and continuous effort and may sharply increase development and support costs.
Now comes the third point: What can you use to help expedite the move or redevelopment of an existing application to a cloud-centric model? Many cloud companies have development tools that can help an organization on this path. For instance, IBM has recently announced IBM Bluemix, a development platform to create cloud-centric applications. Shamim Hossain explains the capabilities in more detail in his blog post. Another option is to use IBM PureApplication System to expedite the development.
I discussed some points here that I hope provide a better understanding of an important concept in cloud computing and how to address it. Let me know your thoughts! Follow me on Twitter @varga_sergio to talk more about it.
Come to the first Cloud Foundry Meetup in the Waltham area this coming Wednesday, December 11th!
This meetup is your opportunity to learn more about Cloud Foundry and meet people excited about the technology.
On the agenda is an Introduction to Cloud Foundry: the technology and the community by Chris Ferris of IBM.
This will be followed by a talk by Renat Khasanshyn of Altoros on Implementing Cloud Foundry 2.0.
More information at: //bit.ly/1azS5PX
Managing software and product lifecycle integration has always been a challenge, and with the pace of new demands on the enterprise, the challenges are increasing. Leaders from different standards organizations and industry will lead interactive discussions on the importance of open technologies in helping enterprises manage lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards and why this work matters to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.
The Open Lifecycle Summit will feature short lightning talks and panel discussions with industry leaders such as OASIS CEO Laurent Liscia, Tasktop CEO Mik Kersten, Opscode VP of Solutions George Moberly, IBM Fellows Michael Kaczmarski and Kevin Stoodley, and IBM VP of Standards and IBM Cloud Labs, Dr. Angel Diaz.
The Summit is free to attend for all those attending IBM Innovate. Join us for an exciting session and refreshments to start your attendance at Innovate 2013. For more information and to RSVP visit http://ibm.co/16jTusU
The challenges of
virtualized environments are driving the shift to greater integration of
service management capabilities such as image and patch management, high-scale
provisioning, monitoring, storage and security. Join us for this webcast to learn how
organizations can realize the full benefits of virtualization to reduce
management costs, decrease deployment time, increase visibility into
performance and maximize utilization.
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess or manage. Integrating cost management into overall service management is crucial, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency – the ability to allocate IT costs, usage, and value.
As a result of feedback from SmartCloud Enterprise customers
and business partners, IBM is rolling out new enhancements this week.*
In addition to the availability of IBM SmartCloud
Application Services, IBM’s platform-as-a-service offering, new and enhanced
capabilities for IBM SmartCloud Enterprise include:
Platinum M2 VM sizes, now generally available
Alternate Windows Instance Capture, now generally available
Windows Import/Copy pre-release, available by request
Windows 2012 pre-release, available to all users
Cloud Services Framework enhancements
APIs for guest messaging, new and available for all users
ISO 27001 Certification for all IBM SCE data centers
Object storage with enhanced portal integration with SCE
All the details of each new capability/enhancement can be
found on the SCE portal in the “What’s
New in SmartCloud Enterprise 2.2” document (SCE account sign-in is required
to review the document), but here are a few highlights:
IBM SmartCloud Application Services (SCAS)
IBM’s platform as a service -- IBM SmartCloud Application
Services -- runs on top of and deploys virtual resources to IBM SmartCloud
Enterprise. SmartCloud Application Services delivers a secure, automated,
cloud-based environment that supports the full lifecycle of accelerated
application development, deployment and delivery. SCAS provides an
enterprise-class infrastructure, enhanced security and pay-per-use, and allows
clients to differentiate themselves with built-in flexible options that
configure cloud their way – leading to a competitive advantage.
You can find the SmartCloud Application Services offering on
the “Service Instance” tab within your SmartCloud Enterprise account.
As a direct result of client requests, we are offering
additional flexibility and choice in Windows instance capture. Clients can now use
the “Save private image” function with or without the use of Sysprep, the
Microsoft System Preparation tool.
We invite you to learn more about all of these enhancements
via the documentation library in the SCE portal and welcome your feedback.
Thank you for your continued support!
* IBM will roll out these new
capabilities in waves beginning mid-December 2012. IBM’s platform as a service offering, IBM
SmartCloud Application Services, can be found in the “Service Instance” tab
within your SmartCloud Enterprise account.
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. Learn more: http://ibm.co/UeAl0B
The challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity. A critical piece to solving these challenges, as many organizations have already discovered, is image management. Read more: http://ibm.co/SpHTlV
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
With the proliferation of cloud computing, many businesses are starting
to adopt a service provider model—either as a deliberate strategy to
establish new revenue streams or, in some cases, inadvertently to
support the growing needs of their organizations. This is especially
true for companies with diverse needs, whether they’re tech companies
with dev teams churning out new apps and services, or business owners
driving requirements for SaaS services and cloud capabilities to enhance
their data center operations.
Cloud Computing is a term that is often bandied about the web these days and
often attributed to different things that -- on the surface -- don't
seem to have that much in common. So just what is Cloud Computing? I've
heard it called a service, a platform, and even an operating system.
Some even link it to such concepts as grid computing -- which is a way
of taking many different computers and linking them together to form one
very big computer.
The most basic definition of cloud computing is the use of the Internet for the
tasks you perform on your computer. The "cloud" represents the Internet.
Cloud Computing is a Service
The simplest thing that a computer does is allow us to store and
retrieve information. We can store our family photographs, our favorite
songs, or even save movies on it. This is also the most basic service
offered by cloud computing.
Flickr is a great example of cloud computing as a service. While Flickr started
with an emphasis on sharing photos and images, it has emerged as a great
place to store those images. In many ways, it is superior to storing
the images on your computer.
First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload the photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road or even from your iPhone while sitting in your local coffee house.
Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.
Third, Flickr provides data security. If you keep your photos on your local
computer, what happens if your hard drive crashes? You'd better hope you
backed them up to a CD or a flash drive! By uploading the images to
Flickr, you are providing yourself with data security by creating a
backup on the web. And while it is always best to keep a local copy --
either on your computer, a compact disc or a flash drive -- the truth is
that you are far more likely to lose the images you store locally than Flickr is to lose them.
This is also where grid computing comes
into play. Beyond just being used as a place to store and share
information, cloud computing can be used to manipulate information. For
example, instead of using a local database, businesses could rent CPU
time on a web-based database.
The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them if you are cut off from the Web.
Cloud Computing is a Platform
"The web is the operating system of the future." While not exactly true -- we'll always need a local operating system -- this popular saying really means that the web is the next great platform.
What is a platform? It is the basic structure on which applications stand. In
other words, it is what runs our apps. Windows is a platform. The Mac OS
is a platform. But a platform doesn't have to be an operating system.
Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends such as Office 2.0,
we are seeing more and more applications that were once the province of
desktop computers being converted into web applications. Word
processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small businesses.
But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes, from web mashups to Facebook applications to web-based massively multiplayer online role-playing games.
With new technologies that help web applications store some information
locally -- which allows an online word processor to be used offline as
well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.
Cloud Computing and Interoperability
A major barrier to cloud computing is the interoperability of
applications. While it is possible to insert an Adobe Acrobat file into a
Microsoft Word document, things get a little bit stickier when we talk
about web-based applications.
This is where some of the most attractive elements of cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- become a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.
Setting aside for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into their spreadsheet, this creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, we'll also need a system to maintain a certain level of security when it comes to this type of data sharing. Possible? Certainly, but it isn't anything that will happen overnight.
What is Cloud Computing?
This brings us back to the initial question. What is cloud computing? It is
the process of taking the services and tasks performed by our computers
and bringing them to the web.
What does this mean to us?
With the "cloud" doing most of the work, this frees us up to access the
"cloud" however we choose. It could be a super-charged desktop PC
designed for high-end gaming, or a "thin client" laptop running the
Linux operating system with an 8 gig flash drive instead of a
conventional hard drive, or even an iPhone or a Blackberry.
We can also get at the same information and perform the same tasks whether we are at work, at home, or even at a friend's house. Not that you would
want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.
Organizations looking to optimize across the application lifecycle recognize the need for enhanced innovation and speed to market. Yet most IT resources are focused on covering the basics, leaving fewer resources to support business agility. The solution: Platform as a Service (PaaS).
IBM’s PaaS solution, IBM SmartCloud Application Services, or SCAS, allows clients to differentiate themselves with built-in flexible services that let them build and customize cloud solutions their way – leading to a competitive advantage. Companies are using enterprise-class IBM Application Services to measure and respond to market demands, capture new markets, and reduce application delivery and management costs.
What are the benefits of a PaaS solution?
First, with the IBM Collaborative Lifecycle Management Service, included within SCAS, development teams can establish shared team development environments in minutes – something that used to take weeks. Within hours they can define their development team and begin working collaboratively to respond to business needs.
Another significant benefit of a PaaS approach is the time it takes to get an application deployed and to market. Application deployment can take weeks on a traditional environment but with IBM SmartCloud Application Services, applications can be deployed to the cloud in minutes.
SCAS also allows clients to respond rapidly to changing market conditions by deploying or modifying cloud-centric (“born on the cloud”) or cloud-enabled (legacy) applications quickly and easily. In fact, developers can move from the dev/test environment directly into production with SCAS, taking advantage of proven repeatable patterns contained within the SmartCloud Application Workload Service. These repeatable patterns help clients eliminate errors by avoiding manual processes, which drives consistent results, increases productivity, and reduces risk.
IBM SmartCloud Application Services are compatible with the newly announced IBM PureSystems family. For example, through SmartCloud Application Services clients can rapidly design, develop, and test their dynamic applications on IBM's public cloud and deploy those same application patterns on a private cloud built with PureApplication Systems, or vice versa.
Want to try IBM’s PaaS . . . for free*? IBM SmartCloud Application Services is now in pilot and accepting new clients who want to get ready to accelerate their cloud initiatives. Clients won’t pay for SCAS services during the pilot, but will only be charged for the underlying *SmartCloud Enterprise infrastructure used by the services (that’s because SCAS runs on top of IBM’s infrastructure-as-a-service offering, SmartCloud Enterprise, or SCE). Existing SCE customers can get up and running on the pilot quickly and start realizing the benefits of PaaS right away.
To be considered for the program, new or existing SCE customers should visit the IBM SmartCloud Application Services web site and click the button on the right titled, “Get a jump on the competition with the SmartCloud Application Services pilot program.”
Who is using IBM SmartCloud Application Services? CLD Partners, a leading provider of IT consulting services with a particular focus on cloud computing, began using SCAS during the beta which launched in 2011 and has now transitioned into the pilot program.
“We share IBM’s vision for how enterprise customers can achieve huge productivity gains by embracing cloud technologies. SCAS allowed us to utilize world class software in a managed environment that greatly reduced the complexity of the deployment while also providing for future scalability that our customers only pay for when they need it,” said Steve Clune, Founder and CEO of CLD Partners. “Ultimately, traditional infrastructure planning and configuration that would have required weeks was literally reduced to hours. And future flexibility as infrastructure needs change is virtually limitless.”
Who would be interested in the SmartCloud Application Services pilot program? IT operations, independent software vendors (ISVs), lines of business, and application developers would all benefit from the SCAS pilot program. And company size doesn’t matter; enterprise or mid-market, all types of businesses can realize value from getting their applications to market faster.
One of the exciting and valuable characteristics of IBM SmartCloud Enterprise is its tight linkage with the IBM Software Group portfolio of offerings. In addition to the offerings from IBM Software Group, innovative software vendors are making exciting offerings available as well. There is an ever-growing list of offerings available to IBM SmartCloud Enterprise customers. These recent additions are now in the SmartCloud Enterprise public catalog and available for you to use.
BYOL - Bring Your Own License; PAYG - Pay As You Go
IBM Business Process Manager is a comprehensive BPM platform giving you visibility and insight to manage business processes. It scales smoothly and easily from an initial project to a full enterprise-wide program. IBM Business Process Manager harnesses complexity in a simple environment to break down silos and better meet customer needs.
The following BPM images are now available in the catalog:
IBM Process Center Advanced 7.5.1 64b - BYOL
IBM Process Center Standard 7.5.1 64b - BYOL
IBM Integration Designer 7.5.1 64b - BYOL
IBM Process Server Advanced 7.5.1 64b - BYOL
IBM Process Server Standard 7.5.1 64b - BYOL
IBM Process Designer 7.5.1 64b - BYOL, PAYG
IBM BPM Express 7.5.1 64b - BYOL, PAYG
IBM WebSphere Service Registry and Repository (WSRR) is a system for storing, accessing and managing information, commonly referred to as service metadata, used in the selection, invocation, management, governance and reuse of services in a successful Service Oriented Architecture (SOA). In other words, it is where you store information about services in your systems, or in other organizations' systems, that you already use, plan to use, or want to be aware of.
The following WSRR images are now available in the catalog:
IBM WebSphere Service Registry 64bit BYOL IBM Image
IBM WebSphere Service Registry 18.104.22.168 64bit BYOL
IBM WebSphere Message Broker (WMB) delivers an advanced Enterprise Service Bus (ESB) that provides connectivity and universal data transformation for both standard and non-standards-based applications and services to power your SOA.
The following WMB images are now available in the catalog:
IBM WebSphere Message Broker 22.214.171.124 64b BYOL
IBM SPSS Decision Management enables business users to automatically deliver high-volume, optimized decisions at the point of impact to achieve superior results.
The following SPSS image is now available in the catalog
IBM SPSS Decision Management 6.2 64b BYOL
From our partner Riverbed comes Riverbed® Stingray™, a software-based application delivery controller (ADC) designed to deliver faster and more reliable access to public web sites and private applications.
The following Riverbed Stingray images are now available in the catalog:
Riverbed Stingray V 8.0 RHEL 6 32 bit BYOL
Riverbed Stingray V 8.0 RHEL 6 64 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 32 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 64 bit BYOL
Additionally, Alphinat SmartGuide provides visual, drag and drop tools that can help you quickly build interactive web dialogues that guide people to the relevant response, help them diagnose problems or lead them through a series of well-defined steps that make it easy to complete complex—or infrequently performed—tasks.
The following Alphinat SmartGuide images are now available in the catalog:
GridRobotics' Cloud Lab Grid Automation Server can manage any number of client or agent computers, which can be spun up automatically on public clouds like IBM SCE or private clouds. Grid Robotics’ Cloud Lab Classroom is a virtual classroom management solution.
The following GridRobotics Cloud Lab images are now available in the catalog:
GridRobotics Cloud Lab Grid Automation Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Classroom Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Base Agent V 1.4 32b R2 - BYOL
Cloud computing tests the limits of security operations and infrastructure from various perspectives. Let us examine what is different about cloud security and identify which threats already exist and which new areas we should be concerned about.
Figure 2 Cloud Security - Existing & New Threats
I think what makes cloud security complex is the number of layers involved in the cloud service stack and the number of components in each layer. So it means:
· Increased infrastructure layers to manage and protect
· Multiple operating systems and applications per server
More Components = More Exposure
As we can see, we already do perimeter protection at the network and operating system levels, as well as physical and personnel security, for the traditional infrastructure. All of this holds good for cloud as well to combat the existing threats at these layers.
Let us examine the new points of exposure with cloud. Security and resiliency complexities are raised by virtualization and automation, which are essential to cloud. The new risks include:
· Cloud Service Management vulnerabilities
· Secure storage of VMs and their images
· Managing identities on the increasing number of virtual assets
· Stealth rootkits in hardware now possible
· Virtual NICs & virtual hardware
· Virtual sprawl, VM stealing
· Dynamic relocation of VMs
· Elimination of physical boundaries
· Manually tracking software and configurations of VMs
For managing these additional complexities, you need a reference model that is comprehensive and covers security controls that can combat not only the existing challenges but also the new challenges that cloud brings in. The Foundational Security Controls for the IBM cloud reference model (see below) provides the different elements and controls required to build a secure cloud.
Figure 1 Foundation Security Controls for IBM Cloud
Managing datacenter identities (identity and access management) is one of the top-most security concerns, and we discussed how to handle it in my previous post. I’ll discuss how to handle the virtualization-related threats in my next post.
Meanwhile, let me know your comments on this reference model. Do you think this set of controls is comprehensive? Do you see any areas not covered from a cloud security perspective? If so, just add it as a comment to this post and let us discuss.
Join us for the 2012 IBM SmartCloud Symposium event on 16-19 April 2012 in San Francisco, California. This Symposium will help you Rethink IT and Reinvent Business.
The event will introduce cloud computing’s disruptive potential not only to reduce cost and complexity but to reinvent the way we do business. Over the course of four days, there will be sessions that define cloud computing and discuss transformative benefits and challenges to consider, while sharing specific, proven patterns of success. We will provide proven methods to get started on the cloud journey, from the up-front investments to capacity planning. This event will cover the technology behind private and public clouds, whether you choose to build your own, leverage prepackaged solutions or have it delivered as a service. We will explore challenges and solutions for security, virtualization and performance of mission-critical applications, as well as automating service delivery processes for cloud environments. We will help you: design, deploy and consume.
In my post on the challenges for cloud, I discussed security as the top concern. I also detailed the top concerns with regard to securing the cloud in the subsequent post.
Cloud computing tests the limits of security operations and infrastructure for
the various security and privacy domains
Cloud brings in a lot of additional considerations such as multi-tenancy, data separation and virtualization. In a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases, greatly affecting all aspects of IT security. We will discuss the different security aspects, classifying them against specific adoption patterns (see post here).
The cloud-enabled data center pattern is the more predominant one, which has infrastructure and identity management as the top concerns. Within cloud security, doing the right design for infrastructure security is the important aspect; the details of this, and how it is done by different public clouds, we discussed in the previous post. Now, with regard to identity, let’s discuss the top requirements and use cases and look at what solutions we can provide to make the cloud secure. Let’s start with managing datacenter identities, which is the top concern.
Managing Datacenter Identities
Identity and access control needs to deliver capability that can be used to provide role-based access to securely connect users to the cloud. The users include cloud service provider as well as consumer roles. Within each user group we need to support user as well as administrator roles. The identity and access management should cover the 4As: Authentication, Authorization, Auditing and Assurance.
§For a cloud consumer user, it is
about making sure the user identity is verified and authenticated at the self
service portal and providing right access to the resource pools.
§For the administrator, we need to
provide role based access to Service Lifecycle Management functions
§We will need to integrate with
existing User Directory infrastructure (AD/LDAP/NIS) to extend the user
identity to the cloud environment as well.
§ Once in the cloud environment, we need to automatically manage access to the cloud resources, through provisioning and de-provisioning of resource profiles and users against the resources in the cloud identity and access management systems. Manual processes to manage accounts for users on various virtual systems and applications are not going to scale in a cloud environment. The same is true of the manual processes to process various audit logs to meet compliance and audit requirements.
§ Massively parallel cloud-computing infrastructures involve enormous pools of external users as well. We need to ensure a smooth user experience so that users don’t need to enter their credentials multiple times to access various applications hosted within the enterprise or by business partners and cloud providers.
§ Management of user identities and access rights across hosted, private and hybrid clouds for internal enterprise users is also a major challenge. It includes:
o Centralized user access management for on- and off-premise applications
o Federated single sign-on and identity mediation across different service providers
Let’s look at some of the capabilities that we can leverage to address these requirements.
IBM Security Identity and Access Assurance provides the following capabilities, which enable clients to reduce costs, improve user productivity, strengthen access control, and support compliance initiatives:
· Role- and policy-based user management that helps effectively manage user accounts and access rights
· Enterprise, Web, and federated single sign-on, inside, outside, and between organizations, including cloud deployments
· Identity and access support for files, operating platforms, Web, social networks, and cloud-based applications
· Access control with stronger forms of authentication (smart cards, tokens, one-time passwords, and so on)
· Monitoring, investigating, and reporting on user activity across the enterprise
Tivoli Identity Manager complements its role management
capabilities with role mining and lifecycle management, provided by the
IBM Security Role and Policy Modeler component, which helps reduce time
and effort to design an enterprise role and access structure, and
automates the process to validate the access information and role
structure with the business.
Security Access Manager for Enterprise Single Sign-On offers wide platform coverage, strong authentication enhancements, and simpler deployments. It introduces 64-bit operating system and application support, a virtual appliance for easier installation and configuration of the server, expanded support for smart cards, and simplified profiling.
Tivoli Federated Identity Manager offers additional Open Authorization (OAuth) authorization standards support (for business-to-consumer deployments and utilization of cloud-based applications and identities), enhanced security for the Secure Hash Algorithm (SHA-2), usability enhancements, and new Business Gateway capabilities.
As we discussed in my previous post, transparency, or more control, is the need of the hour with regard to security on the cloud. Let us examine how this is done by the popular cloud providers and understand the methods and the technologies. We need to secure the infrastructure, network, endpoints, applications, processes, data, and information, and overall have governance in place to mitigate risk and meet compliance. Let us take the infrastructure to begin with.
The key areas for a security team to design for, with regard to infrastructure security, include logs on all resources – VMs and hypervisors.
Let us start looking at the public cloud implementations to
understand how they are managing these aspects.
Almost all the vendors – IBM, Amazon and others – provide a means to SSH into the guest OS using keys. The connection is encrypted and authenticated with a certificate and private key, which can be generated by the customer.
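As a simple sketch of what this looks like in practice, connecting to a provisioned instance with the downloaded private key typically takes the form below; the key file name, user name and host are placeholders for your own values:
ssh -i my-private-key.pem <user>@<instance-public-ip>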
SmartCloud is designed with enterprise security as a top priority. Access
to the infrastructure self-service portal and application programming interface
(API) is restricted to users with an IBM Web Identity. The infrastructure
complies with IBM security policies, including regular security scans and controlled
administrative actions and operations. Within our delivery centres, customer
data and virtual machines are kept in the data centre where provisioned, and
the physical security is the same as that for IBM’s own internal data centres. With the virtual private network (VPN) option,
customers can isolate their servers in the IBM SmartCloud on a virtual local
area network (VLAN) that can act as an extension of their internal network.
This VPN capability can also be used to create security zones in an Internet-facing
configuration to better protect their servers against attacks.
User roles across LotusLive and their access authorizations are recorded in a Separation of Duty matrix.
Security-rich infrastructure: security configuration reviews and periodic vulnerability scanning of all systems and infrastructure.
Enforcement points providing application security: multi-layered compliance with periodic programs that address all elements of the service.
We will see how the infrastructure security aspects are dealt with for private clouds in my next post. Stay tuned and keep those comments coming. I had some of my readers tell me that the blog entries are not showing up fine on Internet Explorer. While I will make the effort to fix the issue, please use Firefox or any other browser in the meantime.
And if you find these posts interesting, don’t forget to rate the post (click on the stars), and if you have an extra minute, do put in a comment on what aspects you find interesting or would like to discuss.