This post is about deploying a test OpenStack instance to try it out, run demonstrations, or learn how to deploy cloud instances. In short, I decided to have a test OpenStack instance running on my old laptop and, if possible, a booking page that lets users request access to this OpenStack setup via Horizon or SSH. I ended up creating this page for booking it - http://tryopenstack.cloudaccess.host - and I have a separate URL for accessing OpenStack: http://tryopenstack.dlinkddns.com (if this URL doesn’t work, it’s because I’ve shut down my laptop).
Installing the Operating System:
Which operating system do you need? I set up both DevStack and RDO (RPM Distribution of OpenStack), two popular free OpenStack distributions. At the time of writing, Ubuntu 16.04 is great for setting up DevStack, which is geared towards giving OpenStack developers an environment to develop and test in. However, after installing the latest release of DevStack, I decided to try out RDO. RDO is deployed using Packstack, a tool that uses Puppet, an open source configuration management tool, to deploy the various components that make up OpenStack.
As of writing this, DevStack’s latest package sets up OpenStack Pike (which can be confirmed from the Nova version, 16.0.0), while the latest stable release of RDO contains OpenStack Ocata.
To set up RDO, CentOS works great, though Fedora may work as well. CentOS is a free Linux distribution based on Red Hat Enterprise Linux. I installed the latest minimal server version pulled from https://www.centos.org/download/ - specifically, the “Everything ISO” installer, which includes the option to set up the Minimal version.
My laptop has a wireless port, and for convenience I didn’t want to plug in an Ethernet cable. This presents a new problem: the Packstack setup recommends that you have NetworkManager disabled on the operating system, which means DHCP is out of the question. It’s also not recommended to have DHCP on the port while running OpenStack. I initially solved this with a DHCP reservation, a fixed IP address tied to the MAC address of my wireless port. In a way that’s static, but it still uses DHCP to hand the same IP address to my wireless port every time. So I started experimenting with several static IP configurations on my wireless port, and eventually figured out that I had to authenticate to my wireless router manually.
To do this, I generated a hex PSK for my SSID and password combination using a free WPA PSK generator. Then I created a wpa_supplicant configuration file called “wpa_supplicant.conf” with the following information.
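A minimal wpa_supplicant.conf looks something like this (the SSID and the 64-character hex PSK below are placeholders for your own values):

ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="MyHomeSSID"
    psk=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
}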
Then use the following command to authenticate to the wireless router.
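Something along these lines works on CentOS (-B backgrounds the process, -i names the interface, -c points at the configuration file, and -D selects the wireless driver):

wpa_supplicant -B -i wlp4s0 -c /etc/wpa_supplicant.conf -D nl80211,wext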
As you may have guessed, wlp4s0 is my wireless port. Now, I had to make this persistent across reboots, so I added the line above to the /etc/rc.d/rc.local file and set the right permissions on that file: chmod +x /etc/rc.d/rc.local.
At this stage, it’s best to reboot your laptop to make sure you can connect to the Internet through the wireless port.
Continue OpenStack setup using PackStack:
It’s not mentioned on the website, but it’s better to disable SELinux or set it to permissive. It’s also recommended to have a fully qualified domain name (FQDN), and to make sure the /etc/hosts and /etc/resolv.conf files are set up properly.
Then perform the following self-explanatory steps.
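They follow the standard RDO quickstart; for the Ocata release they amount to enabling the RDO repository, updating the system and installing Packstack (swap the release name if you want a different one):

sudo yum install -y centos-release-openstack-ocata
sudo yum update -y
sudo yum install -y openstack-packstack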
For my all-in-one laptop setup, contrary to what’s specified on the installation website, it’s best to generate an answer file which you can use to fine-tune what you actually set up.
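Generating the answer file is a single command (the file name is arbitrary; answer.txt is simply what I refer to below):

packstack --gen-answer-file=answer.txt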
This produces a file (called answer.txt) which contains all the parameters for the OpenStack installation. I disabled certain OpenStack components like Swift (the Object Storage service) and telemetry (the metering service), as this is just a test setup on a laptop.
Then I used the following command to run the installation.
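The invocation was essentially Packstack pointed at the edited answer file:

packstack --answer-file=answer.txt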
My laptop had around 300 GB of disk space, with 4 cores and 8 GB of RAM. The setup completed in about 30 minutes, and at the end you should see an “Installation completed successfully” message.
It didn’t work the first time (Error: systemd start for rabbitmq server failed), so I asked a question on the Ask OpenStack forum. It hadn’t been answered as of this blog post, but I solved the problem by adding a search entry with my domain name to my /etc/resolv.conf file and by adding my internal IP (the wireless port’s IP) and my domain name to the /etc/hosts file. I will explain how to generate this domain name later. Note that the way you know the installation didn’t succeed is the absence of the “Installation completed successfully” message; you will still be able to access the Horizon dashboard, but several things that depend on the RabbitMQ server will not work.
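For illustration only (the domain, host name, router IP and laptop IP below are placeholders for your own values), the relevant entries looked something like this:

In /etc/resolv.conf:
search dlinkddns.com
nameserver 192.168.0.1

In /etc/hosts:
192.168.0.10   tryopenstack.dlinkddns.com   tryopenstack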
Once the installation is done, OpenStack sets up several virtual bridges: one for the internal network (your private home network), one for the external network (the public network) and one as a tunnel. Here’s a screenshot to better explain this:
There are a couple of ways to handle this. The first, and I’d assume the recommended one, is to set up the “br-ex” interface with the IP address that was on the “wlp4s0” interface, and then make wlp4s0 a port on the OVS bridge. If someone needs this configuration, I’ll post it here – but for the sake of brevity let me ignore this for now. In short, the wireless interface configuration would change to something like this:
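As a rough sketch based on the standard RDO/OVS ifcfg layout (the addresses are placeholders), the two files under /etc/sysconfig/network-scripts would look like this:

ifcfg-br-ex:
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
ONBOOT=yes

ifcfg-wlp4s0:
DEVICE=wlp4s0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes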
The second way is to use the default network configured by OpenStack on the external bridge – 172.24.4.0/24, as shown in the screenshot above. Since it’s a bridge, this network can still route traffic via the wireless port to the Internet by default, and the instances you deploy will have connectivity to the Internet – sort of like a tunnel (I have proof of this below). However, you will not be able to ping the deployed instances from other laptops on your local wireless network.
Once the installation is done, you can simply point your browser at the static IP address that you configured for the laptop, and you should see the OpenStack dashboard. One of the things I was pleasantly surprised by was a “material” theme for the OpenStack interface – inspired by Google’s material design, if you’re familiar with Android development.
I had a Cirros image stuck in the “queued” state, and I was able to get rid of it only from the CLI, using the following command:
glance image-delete <image_ID>
This may have been because of the failed first run I mentioned above.
The rest of the setup is very important. Next, you set up Neutron. I briefly mentioned above which internal and external networks I set up, and I can provide more details if anyone needs them. Then set up a virtual router to bridge the internal and external networks using one gateway address. This is a screenshot of the topology from the “second way” I mentioned above.
Then it’s best to set up your key pairs. You have to copy your public key into the Key Pairs section on Horizon. Most cloud instances need key pairs for authentication, and your regular username/password method of authentication will not work.
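A typical flow, assuming the default Cirros login name and a key path of your choosing, looks like this:

ssh-keygen -t rsa -f ~/.ssh/openstack_key
(paste the contents of ~/.ssh/openstack_key.pub into the Key Pairs section on Horizon)
ssh -i ~/.ssh/openstack_key cirros@<floating-ip>

The last command only applies later, once an instance is running and has a floating IP associated with it.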
Then set up an image – several cloud-init enabled images are available on the Internet that you can either download and upload to Horizon, or import directly via a URL. I decided to stick with the Cirros image, just for initial testing.
Horizon in the Ocata release has revamped the instance launching screen. It looks much better and is intuitive.
To cut a long story short, I deployed my first instance and it was successful. I also associated a floating IP on the test 172.24.4.0 network. The wireless port acted as the gateway for the internal 192.168.0.0 network.
What’s really surprising for a first-time viewer is that the instance was able to ping google.com despite not being registered on my wireless router.
Setup a domain:
This step is quite important and should probably be at the top of this page, as it was one of the first things I did. My cheap wireless router has a feature for creating DDNS addresses associated with the public IP (which may change on router reboot) provided by my ISP. So I generated a URL for my public IP address and opened up the HTTP port (80) on the router. If this URL doesn’t work, it’s because I’ve shut down my laptop.
The idea is that if you send me a request via this URL (http://tryopenstack.cloudaccess.host/), I’d be able to power on my laptop, create a project and user for you, and let you access the resources. I also set up a booking page to charge for resources via PayPal.
Here’s a diagram I came up with in a short time to explain the whole thing to a layman:
I wrote the whole thing in about 30 minutes, and I apologize for grammatical errors.
Things to try in the future:
Convert the site to HTTPS/SSL
Try increasing the swap space and figure out whether it’s possible to give deployed instances more memory (OpenStack itself takes up about 6 GB of RAM)
Go back to the “first way” of Neutron setup – using the wireless network as the internal network for deployed instances – but this would require real floating IP addresses on the Internet for these instances
Try a multi-node setup
Implement a similar setup on test instances on a public/private cloud
The use of business intelligence and analytics to make decisions is on the rise in corporate America. According to a Gartner report, the BI and analytics market is expected to grow to $20 billion by 2019. We are already at a stage where over 50% of the analysts and users in organizations have access to self-service tools for business intelligence. This is not surprising given recent reports suggesting that the use of such tools makes businesses five times more likely to make faster decisions.
But it’s not just faster decision-making that is driving the adoption of business intelligence tools. BI and analytics rely heavily on data to arrive at conclusions, and decisions based on data are likely to be far more predictable and trustworthy than those based on gut feelings or consumer surveys.
Deploying business intelligence at work is, however, much more than simply installing a vendor’s software. BI tools are only as successful as the organization makes them. The most successful businesses are those that see BI as one component of a larger process and culture change within the company. This change requires managers to establish processes that aggregate more data, process it, and actively pursue insights from it to arrive at decisions.
Identify your objectives
The first step towards a successful deployment of BI is understanding your business objectives. A cost reduction project, for instance, would require a system where capital and cash outflow data from all your various warehouses and distribution centers are available at a granular level. On the other hand, if your objective is revenue maximization, then your system will not only need data pertaining to the various SKUs in the market along with their sales and distribution numbers, but also similar data of your competitors. In other words, knowing your business objectives will tell you the kind of data you will need. This will help you establish a system that gathers this data. Without a working system, deploying a BI platform is meaningless.
Pick the right tool
The success of a BI deployment depends to a great extent on the software tools that you use. However, the best BI tool in the market may not be the right one for your business. Your evaluation should include various parameters like the cost of the tool, the size of business it is targeted at and the nature of deployment. According to this list of BI software tools, there are over 339 products available in the market today. While tools like Slemma BI and Grow BI are targeted at the price-sensitive customer, others like Rapid Insight focus on businesses that prefer a local Windows installation. Pick a tool that not only meets all your feature needs, but also works within your budget and deployment requirements.
Effect a data-driven culture
Business intelligence is one of the best examples of the popular computing phrase ‘garbage in, garbage out’: incorrect or insufficient input produces incorrect or insufficient output. The only way to break this cycle is to drive a cultural change within the organization that focuses on gathering data at every level and bringing it together for the decision-making process. This is not an easy thing to achieve, especially if you are an enterprise business with hundreds or thousands of employees. The lead, however, needs to come from the top, and this push for data-driven decision-making is crucial to the success of a BI deployment.
Do you make use of business intelligence software at work? Share your experience with us in the comments.
A software review is a process in which team members get together and discuss the different aspects that need to be incorporated in the upcoming product or software. Different sources aid the review, such as test specifications, test plans and design documents. It essentially serves the purpose of bridging the gap between the user requirements and what is actually built. These reviews are performed at different levels of formality, including formal inspections, pair programming, and informal walkthroughs.
A software review also catches architectural errors, bugs, logic flaws and typos. Senior and junior engineers work out the solution together, which helps them get to know the product in more detail.
Why is it important?
Software reviews take place many times while the product is being developed. This helps all the members of the team stay on the same page and helps maintain consistency, which is crucial as members work in different styles. Also, the company has many projects that need its attention. Getting the software checked by various associates brings in different perspectives, and the problems and bugs in the software can be corrected more easily when these reviews are held. A much better and easier solution can be found for the same problem when many minds are at work.
There are different types of software reviews which are discussed below.
Software peer review is done at the developer level. The author takes the help of his or her colleagues and fellow developers to test and examine the technical content of the software. Linus’s Law states that “given enough eyeballs, all bugs are shallow”, which loosely means that the more reviewers there are, the easier problems are to tackle. This type of approach is usually seen in open source projects.
The software management review is done to assess the software’s usage of resources and the overall project status. A management review can be either informal or formal. A formal review follows specific steps: first, the structure of the review is prepared and planned; next, the previous software peer reviews are analyzed, individual preparation is checked, and the group or team examination takes place; a follow-up process then takes place and the formal management review is wrapped up.
Other aspects of software review
Performing software reviews during the development stage is efficient: the cost of fixing a problem at this level is much lower than fixing it post-production during software testing, and there is always the downside of recalling products after dispatching and launching them in the market. This is also known as the defect detection process.
Software reviews also help train developers to write code with fewer errors. This becomes more efficient over the years as team members learn to keep errors at bay. This is the next level of review, known as the defect prevention process, and it helps the company in the long run.
A product that is developed earlier tends to have a greater impact on further projects, and reviews help in releasing the product to the market early and making greater profits.
Software review plays a pivotal role in the development of any product a company ships, and it has many advantages, especially at the user end. One of the main aspects of software review is testing. The cost of developing the software, and of the hardware required for it, can be kept very low. That’s why, for example, one can find smartphones that are both cheap and expensive: what goes into the higher brands’ flagship models is their rigorous testing and reviewing. A product is certified and widely accepted by the crowd thanks to its reviews. Also, a smartphone generally has an average shelf life of 2 to 3 years; advanced technology is incorporated at every stage in all fields of electronics, and software reviews have proven to be really successful there.
Big data has become the blood that pulses through our complex, modern society. But more than simple data acquisition, the challenge is leveraging so much information. From Amazon to the Food and Drug Administration, everyone is looking for ways to use data to boost productivity and improve customer support. And it’s only just begun. According to a Gartner report, more than 75% of companies are now investing or planning to invest in big data imminently.
As with most advancements in technology, necessity has been the mother of invention. The influx of data brought about by the big data revolution has created a real and serious need for technologies capable of analyzing and organizing all this information. More important is translating this data into timely, actionable feedback and eventual ROI.
A number of technological solutions to the data overload problem have been explored, but after some trial and error, the graphics processing unit (GPU) has emerged as the front-runner. You've probably already heard of the GPU, which originally enabled your computer to deliver a faster, more enjoyable experience for gaming and watching videos. It didn’t take long for innovators to realize the same technology could also process more intensive computations faster, and more cost-efficiently, than the CPU-based methods in use. In other words, using GPUs for big data analytics means any business can make better-informed decisions in real time.
Realizing this potential, it was just a matter of writing a new programming language to allow direct interaction with the GPU. Now massive amounts of data could be systematically organized into usable chunks both actionable and precise. Some systems even offer users marketing suggestions and best practices advice.
Change doesn't happen overnight. Nevertheless, the GPU revolution is beginning to spread. In 2010, China’s Tianjin-based Supercomputing Center launched the Tianhe-1A, equipped with 14,336 Xeon X5670 processors and 7,168 Nvidia Tesla M2050 general-purpose GPUs. It was, for a time, the fastest supercomputer on the planet. Since then, GPU-based supercomputers have gone into operation in the U.S., Russia, Switzerland, Italy, and Australia.
An exciting development has been the advent of affordable, software-optimized systems for the business world. A number of developers, Tel Aviv-based SQream, for example, use software to maximize the power efficiency of databases. The result is faster, more comprehensive analytics without the need for expensive supercomputers. From any perspective, this simply makes more sense for the average business than investing millions in hardware.
GPUs and Deep Learning
Where GPU technology really shines is deep learning. Simply put, deep learning is a type of machine learning that operates in a manner similar to the human brain: it enables computers and other machines to learn and adapt their responses according to perceived (or input) behavioral patterns. While this was possible with standard CPU systems, GPUs allow machines to adapt faster and more precisely to new situations.
Big data is opening doors in ways no one would ever have believed just a few decades ago. GPU-based technologies are the keys to unlocking them. It’s an exciting time to be involved in the world of tech. One can only imagine what new developments will come. But without a doubt, big data will continue to change how business is done into the future.
Separate Reconciliation and Renderer
The Fiber architecture splits the process of reconciling the DOM tree from the actual rendering step, which enables the use of different types of renderers. The reconciliation step is necessary to compute the difference between the DOM nodes at one point and at another point after a state change in the application. Keeping the render phase decoupled helps React run in places like VR (the React-VR project), hardware (the React-Hardware project), the web (the core ReactJS project), and native components on platforms like iOS and Android (the React-Native project) - all while sharing a common project base. The ReactJS codebase is very convoluted at the moment, which makes it difficult for beginners to contribute to it; this rewrite aims to solve that issue as well.
Breaking Reconciliation into Two Phases
React Fiber breaks reconciliation into two parts: evaluation and commit. The evaluation of changes to be made to the DOM tree is now asynchronous. React uses a virtual DOM to efficiently find the difference between the current state and the next state, and only propagates the final result to the real DOM. Earlier, React would move down the component tree and apply changes along the way. Now it waits for the browser to hand it control, using the requestIdleCallback API where it is supported and a polyfill for older browsers. This helps the browser keep the UI responsive even when React is making an update that requires heavy computation. The interruptible evaluation phase makes the application more fluid. The second phase is the commit phase, in which the evaluated updates are written to the DOM tree. This phase is non-interruptible, as conflicting updates might introduce layout thrashing.
Returning Array Of Components
React Fiber supports returning an array of components from the render function instead of a single top-level component. Earlier, if a render function wanted to return multiple components, it had to wrap them in a top-level dummy component, such as a span on the web platform. This worked just fine but caused major problems when the application used flexbox for styling. React Fiber finally solves this much-hated problem and more: render functions can even return nested arrays of components, or just text.
Web developers are in a constant fight to reduce the time to first paint for the user. It is common practice to use server-side rendering to render React applications before pushing the content to the user. This becomes problematic when the render takes a considerable amount of time, perhaps because of some heavy computation; until that processing is complete, the server cannot send the rendered application. React Fiber solves this by including a streaming renderer: content can be streamed as soon as it is processed. This makes it possible to send static content, like analytics embeds from providers such as Mixpanel and Adobe Analytics, beforehand by keeping them in top-level components. Data from these can be combined later using integrations from an aggregator like Segment. This feature has been requested for a long time, and it will most likely make the cut for the initial release of Fiber.
Different Priorities For Different Updates
React Fiber introduces the concept of different priorities for different kinds of updates. Some updates might be high priority, like animation updates, but they might be happening in deeply nested components. In that case, if the parent components require heavy processing, that would block the animation update in the child components in the current version of React. React Fiber solves this by assigning a higher priority to animation updates, so different kinds of updates get arranged in a different order. There are other concerns here that the React team needs to solve: jumping the queue should be allowed for some updates, but other updates must not starve, and component lifecycle events might go out of sync because of this reordering. Facebook’s React team is currently looking into a graceful lifecycle event path. This architecture also makes the componentWillMount event questionable, and there have been requests to remove it as well.
Parallelized Updates to The Tree
React Fiber makes it possible to make parallelized updates to the DOM. In the current version, updates are applied recursively to the components in the order in which they are listed.
React Fiber is definitely going to make a lot of applications work better out of the box. The official semantic version it will hold is 16.0.0. If your application works fine with version 15.5.4, it will continue to work seamlessly with React Fiber once you upgrade.
Google is one of the most prominent firms in the information technology market. This is because Google is not only responsible for its own software, but is also one of the most important companies producing search results on the internet. Many companies depend on the Google search engine to make sure that their brand is displayed at the top of the search results page. This is driven by the company’s artificial intelligence, developed under the name of Google Rank Brain. To evaluate the algorithm, the company pitted its search engineers against Google Rank Brain: the engineers were asked to guess the top results that Google would display for a particular query, and Google Rank Brain was tested on the same task. The results showed that the engineers had a 70 percent success rate while Google Rank Brain had an 80 percent success rate. This made it one of the primary choices for companies looking to improve their online presence. Let us look at this in more detail in this article.
Google Rank Brain is an algorithm built on hundreds of mathematical calculations called vector measurements. The algorithm is written in such a way that the computer can understand the query and optimize the search results. The main job of Google Rank Brain is to boost the results for Google queries. For example, if the user types a certain word in the query box, Google Rank Brain searches its database for phrases related to that word and helps in finding accurate results. This makes the task more effective and efficient.
Another important feature of Google Rank Brain is that it can learn new things based on how users search for results. Google Rank Brain is thereby very efficient at learning new phrases and recognizing search patterns.
Importance of Google Rank Brain
We are in a phase where every little thing is shifting to online platforms. It is very important to walk hand-in-hand with technology to make sure that you do not miss out on potential customers and associates. You can only make your business successful if you have a good online and offline presence in the market. For an impressive online presence, you should know the basics, such as search engine optimization. Google Rank Brain is responsible for as much as 15% of search results on the internet.
There are different factors responsible for the query results and one of the most important is Google Rank Brain. This is because the algorithm is written in such a way that the top results will be displayed according to the phrase in the database of the search engine. This gives accurate search results.
Google Rank Brain and SEO
Search Engine Optimization, or SEO, is one of the most important factors any company should keep in mind if it wants an impressive and influential online presence. For this, the content writers of the website should be aware of the keywords customers use while searching for a particular query. The more accurate the keywords, the more attention the website will receive. Google Rank Brain is based entirely on phrases and keywords, so content writers should use a set of keywords that accurately reflects the services the company is offering.
When the correct set of keywords is used, Google Rank Brain will match them with the keywords in its database and link them to the websites that are ranked by the algorithm.
Future of Google Rank Brain
Google Rank Brain was first rolled out in 2015. Back then, the software engineers had built the algorithm so that it remained constant until the script was updated and rolled out on the internet. Gradually, changes were made to the script to make it more user-friendly and interactive. As users enter queries, Google Rank Brain learns the pattern and then feeds it back into the script. This makes the script more interactive, and you get better results the next time you search on the internet.
Graphic design is not a cakewalk; it usually requires the user to enroll in design school and cultivate high-level skills and experience in the field. This often takes a toll on non-professionals and students who toil with design. Fortunately, all those out there who are struggling to put together creative visual presentations and lack the professional skills of an experienced designer need fret no more. The creative team at Adobe has built an easy-to-use design application, Adobe Spark, which aims to help non-professionals and students create beautiful visual presentations, web stories, e-invites, animated videos and what not! The application is not only free of cost but also extremely easy to use. One of its striking features is its popularity among children: the colorful and bright interface evokes interest and a feeling of enjoyment among not just kids, but adults as well. Let’s look at some of the most creative ways in which Adobe Spark can be put to optimum use.
Thinking about creating a website to share information and promote a newly created handicrafts, bakery or confectionery business, or even accounts of adrenaline-filled solo trips? One can simply create beautiful and creative websites, without any prerequisite technical knowledge of HTML, using the Page option in Adobe Spark. The user first creates an account, which is absolutely free, then picks a suitable theme from the Theme Gallery and chooses from an exquisite range of fonts to beautify it further. One can upload one’s own favorite photographs or choose from the photos already provided in Spark, then customize the page with more creative elements and preview the work. The website is assigned a new URL that can be shared with friends and family across social media. Users can even put together travel blogs, web stories, and newsletters using the Page feature in the application.
School projects are no more monotonous!
Projects are sometimes boring and tedious, so children end up procrastinating and avoiding them till the eleventh hour. With Adobe Spark, school projects can be made more fun-filled, engaging and interesting. The Video feature allows users to create tutorials, presentations and even animated videos within a short span of time. Students can choose from a variety of colorful templates, gorgeous typography and eye-catching themes to make their projects look more appealing and creative. They can even record audio and incorporate it into the animated videos to instill life into the stories and characters they create. Teachers and educators can easily play with this innovative application and redefine traditional assignments and projects so that students enjoy them and look forward to working on them.
E-invites and save the date postcards
When sharing announcements of birthdays, engagements, weddings, graduations and even christening ceremonies, one looks for the most unique and beautiful ways to do so, so that the invitees take the time to read and indulge in the creativity of the invites. In an era of technology, facilities like e-invites save time and are considered much more economical than traditional physical invites. The Spark Post feature allows users to put together elegant and artistic e-invites and save-the-date postcards that catch readers’ eyes. This feature is a great time saver, since one can share the invitations via email or download them for printing. It can also be used to design advertisements, pamphlets, flyers, and even quotes in innovative and aesthetic ways that are sure to entice readers.
After creating design applications like Photoshop, Illustrator, and Lightroom, Adobe has finally put together an application, Spark, which doesn’t require users to possess any prior design skills or knowledge of graphic design. Thanks to the versatility of its features, the eye-catching and colourful user interface that makes it popular among both children and adults, and the numerous creative elements incorporated, Adobe Spark is a one-stop destination for all the non-professionals out there who are looking forward to stepping into the world of creative and efficient design, be it at home, in the workplace or even at school.
Blockchain is gaining popularity as an emerging trend in technology that ensures the security and management of a database. It proves to be a convenient approach to record and data management, identity management, and transaction processing and documentation, where the system is resistant to any form of alteration or modification by a third party. The database manages a growing collection of records known as blocks, where each block is linked to a previous block. Crafted to keep data from being misused, blockchain technology makes it possible to maintain records and transactions between two parties in an efficient and verifiable manner.
Designed specifically to protect data from being tampered with, a blockchain utilizes a distributed timestamping server where the database is maintained autonomously and the open distributed ledger is programmed to process transactions automatically. Serving as a public ledger for all potential transactions, the concept was first introduced by Satoshi Nakamoto almost a decade ago, when it was applied to the digital currency Bitcoin, which facilitated transactions on a public and decentralized ledger. The blockchain technology built for Bitcoin was the first in the digital field to provide a solution to double spending without involving an authority or a centralized server.
New developments in blockchain technology provide a platform for secure transaction processing through a decentralized and distributed public ledger where online records are effectively managed and maintained without the interference of a third party. This enables auditing and data management to be carried out cost-effectively. Blockchain technology has created a rising demand for itself in a market where people prioritize data security. Blockchain negates the possibility of replicating a database or digital asset: while providing a solution to the problem of double spending, the technology ensures that every segment of data is only transferred once. Proving to be an effective method of managing transactions, blockchain is a safer option compared to traditional methods of data management. Consisting of transactions and blocks, blockchain technology offers a systematic approach to the decentralized management of online data, and it can secure and manage large-scale transactions through distribution and decentralization of the digital ledger. It is, therefore, a trusted approach to database management, as it aims to maintain the quality and safety of online interactions.
Defining Digital Trust
Utilizing cryptography, the technology aims to ensure that data cannot be altered or modified once recorded. As an ethical form of data management, blockchain has enabled transparency in the fight against the unethical trade of jewelry and diamonds, helping the industry comply with government regulations. Having started off as a transaction and data processing system in banks, the evolving technology has come a long way to making its mark in the global market. Retailers are moving to utilize this technology to facilitate transparent online transactions and clarity in inventory management. Business organizations believe the technology has the potential to enable trustworthy financial transactions on the digital front. Humaniq, for example, is a financial services app powered by the Ethereum blockchain. The startup aims to provide formal banking access to people who currently do not enjoy the privilege of systematized banking and transaction services. Integrating with the global economy, Humaniq aims to introduce its application, which uses blockchain and biometric technologies, to provide over 2 billion people with financial inclusion.
As the public ledger for Bitcoin transactions, blockchain technology ensures a safe and transparent environment for online transactions. The industry has witnessed a significant rise in demand for blockchain technology, as it integrates cost-effective transaction management to offer a trustworthy platform to the public. The global trend has shifted from a centralized authority or controlling third party towards a decentralized and autonomous transaction platform that ensures the security and preservation of the original data. This innovation has enabled the formation of an independent body, built through mass collaboration, that enforces the authentication of and confidence in online information. Carving itself a niche in the market for financial transactions, the technology is seeing widespread global demand.
New developments in blockchain technology effectively ensure immutability and transparency, following ethical and trustworthy means of facilitating online transactions. This innovation aims to have a lasting impact on the trustworthy trade of digital currency, and it ensures transparency by recording transactions and information publicly through a decentralized digital ledger. While yet to gain complete prominence in the market, the rising technology has shown potential for exponential growth through the maintenance of an incorruptible public interface.
Project managers are spoilt for choice when it comes to the various alternatives available in the market today. Picking one among these project management tools is essentially a question of what features you need for your business and which of these tools meets all of your needs at the most affordable price. Given that most project management tools price their product based on the number of users in the system, the outgoing expense can quickly escalate.
Project Management Tool Comparison By Price
In this showdown, we are going to compare four project management tools that are each priced in their own unique way. Hubbion is completely free to use. Trello offers most essential features free of cost while restricting some important ones. Wrike offers very few features for free, with most of the important project management features restricted to paid users. Basecamp has no free version, but a fixed fee of $99/month.
Now let us compare the features offered by each of these tools. Each of these tools has its own take on how many users can be permitted to collaborate, how many projects they may collaborate on, the size of the files that can be uploaded and the storage limit for documents.
Users: Hubbion is open to an unlimited number of users, and so is Trello. Wrike, on the other hand, is free for up to five users; for larger teams, you will need to pick a paid plan that starts at $9.80/user/month. Basecamp, as mentioned already, requires you to pay a fixed $99/month fee for an unlimited number of users. Another difference to note here is team structure. Both Hubbion and Trello let users collaborate with multiple teams at the same time - that is, if you are a third-party contractor working with multiple clients, you can do so from a single dashboard on Hubbion. However, users on Wrike and Basecamp are managed by a team administrator, so you only get added to a single team where you can collaborate.
Projects: The next feature that commonly differentiates the various alternatives is the number of projects you can create. All the tools offer an unlimited number of projects. However, if you are a free user, then you can only do so on Hubbion, Trello or Wrike (for up to 5 users). Paid users have the option to create unlimited projects on all platforms, including Basecamp.
File Storage Limits: Depending on what business you are in, the free version may or may not be a good fit for your needs. Businesses that deal with large file sizes would find Hubbion the best among all these tools. It offers unlimited file size uploads along with no visible cap on the total storage size. Trello has a 10 MB cap on the file size while Wrike limits the overall file storage to 2 GB. Paid users of Trello can attach files of up to 250 MB while those on Wrike can store as much as 5 GB (going up to 100 GB) depending on the plan you choose.
Cloud storage services have seen a massive increase in the number of users in the last few years - both in the personal storage space and the business use case. This increase has come with a lot of scaling challenges for the service providers. One such challenge is implementing a good resource-sharing management system. Users may want to share their content with others; this is fine in ordinary conditions, but when a user shares content with a large audience, your service takes a hit due to hotlinking.
There are many other problems in this space. The content that has been shared might be of illegitimate origin or might contain offensive material, so the service becomes a vector for illegal activity; this is particularly troublesome in the case of pirated content. Another problem is that a widely shared resource might cause an unintentional denial-of-service attack on the service. The service would not be able to collect any meaningful analytics either. What if users want to consume the content themselves but cannot log in every time to reach it? This is very common when a download manager is used for fetching the resource, or when the user wants to resume the download at a later time. What if the user wants to share the content with a limited set of people, and would like the resource URL to expire after a certain time period? Solutions to all these problems and more find their use in cloud storage services like Amazon S3, Google Cloud and virtual data rooms.
URL signing is a scalable solution to the problems mentioned above. The idea is very simple: each resource URL is generated in a way that is unique and identifiably linked to the creator of that URL. This is done by including an identification object, in JSON format, in the URL as a parameter, encrypted as per the JOSE standard. This object can contain a number of claims that identify the issuer of the URL, the expiration time, the start time, the scope of sharing (which identifies who else has access to the content other than the creator), and the sharing policy (public vs. private vs. login required). When a request is received, the service provider verifies it using public-key cryptography and denies all invalid requests.
URL signing comes with some caveats too, the biggest one being the difficulty of implementing caching strategies. Let’s say a user looks at a picture on a website, and this picture is served to the user via a signed URL. If the appropriate headers are set on the resource, the browser will cache it and link it to the URL it came from. However, with signed URLs, unless you maintain state on the server side or design a stateless algorithm that issues the same signed URL within a specified duration, the browser will not be able to leverage the cached image, since the URL signature will change the next time the user looks at the picture. This is also problematic when the resource is large and takes considerable time to download: the user may want to download a portion of the resource later on, but resuming will not work if the signed URL changes. Still, the increased difficulty of implementation is well rewarded by the benefits that come with signed URLs.
Both Google Cloud and Amazon S3 provide first-class support for URL signing in their platforms, though they differ a little in their implementation details. More details can be found in their respective documentation here and here.
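As a concrete example of a time-limited signed URL in practice (assuming the AWS CLI is configured with credentials that can read the object; the bucket and key are placeholders), generating one for an S3 object is a one-liner:

aws s3 presign s3://my-bucket/reports/q3.pdf --expires-in 3600

The command returns a URL with the signature and expiry embedded as query parameters, which can be handed to a download manager or shared without exposing the credentials themselves.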
With any new technology, there’s “fake news”, and SD-WANs are no exception. It’s true, SD-WANs probably won’t reduce your WAN costs by 90 percent or make WANs so simple a 12-year old can deploy them. But there are plenty of reasons to be genuinely excited about the technology -- and we’re not just talking about cost savings. Often, these “other” reasons get lumped into the catechisms of greater “agility” and “ease of use,” but here’s what all of that really means.
Align the Network to Business Requirements
When organizations purchase computers for employees, they try to maximize their investment by aligning device cost and configuration to user function. Developers receive machines with fast processors, plenty of memory, and multiple screens; salespeople receive laptops; and designers get great graphics adapters (and Apples, of course). SD-WANs let you do the same with the network, matching the connectivity at each location to its requirements:
Mission critical locations, such as datacenters or regional hubs. These can be connected by active-active, dual-homed fiber connections managed and monitored 24x7 by an external provider -- and with a price tag that approaches MPLS.
A single, xDSL connection. This can connect small offices or less critical locations for significant savings as compared against MPLS.
Short-term connections. These can be set up with 4G/LTE and, depending on the service, mobile users can be connected with VPN clients.
All are governed by the same set of routing and security policies used on the backbone. By adapting the configuration to location requirements, businesses are able to improve their return on investment (ROI) from SD-WANs.
Easy and Rapid Configuration
For years, WAN engineering has meant learning CLIs and scripts and mastering protocols like BGP, OSPF, PBR, and more. It was an arcane art, and CCIEs were the master craftsmen of the trade. But for many companies, managing their networks this way is too expensive and not very scalable: some companies lack the internal engineering expertise, while others have the expertise but far too many elements in their networks.
SD-WANs may not make WANs simple, but they do allow your networking engineers to be more productive by making WANs much easier to deploy and manage. The “secret sauce” is extensive use of policies.
Policy configuration helps eliminate “snowflake” deployments, where some branch offices are configured slightly differently than other offices. Policies allow for zero-touch provisioning and deployment. Policies also guide application behavior, making it easier to deliver new services across the WAN without adversely impacting the network. With an SD-WAN, you really can drop-ship an appliance to Ittoqqortoormiit, Greenland and have just about anyone install the device.
Limit Spread of Malware
SD-WANs position an organization to stop attacks from across the WAN. The MPLS networks that drive most enterprises were deployed at a time when threats predominantly came from outside the company: “security” meant protecting the company’s central Internet access point and deploying endpoint security on clients. Once inside the enterprise, though, many WANs are flat networks with all sites able to access one another. Malware can move laterally across the enterprise easily, as happened in the Target breach that exposed 40 million customer debit and credit card accounts.
SD-WANs start to address some of these challenges by segmenting the WAN at layer three (actually, layer 3.5, but let’s not get picky) with multipoint IPsec tunnels. The SD-WAN nodes in each location map VLANs or IP address ranges to the IPsec tunnels (the “overlays”) based on customer-defined policies. Users are limited to seeing and accessing the resources associated with that overlay. As such, rather than being able to attack the complete network, malicious users can only attack the resources accessible from their overlays. The same is true with malware. Lateral movement is limited to other endpoints in the overlay -- not the entire company.
Don’t Sweat the Backhoe
As much as MPLS service providers manage their backbones, none of that will protect you from the errant backhoe operator, the squirrels, or any one of a dozen other “mishaps” that break local loops. Redundant connections are what’s needed.
With MPLS, that would normally mean connecting a location with an active MPLS line and a passive Internet connection that’s only used for an outage. Running active-active is possible, but can introduce routing loops or make route configuration more complicated. Failover between lines with MPLS is based on DNS or route convergence, which takes too long to sustain a session. Any voice calls, for example, in process at the moment of a line outage will be disrupted as the sessions switch onto a secondary line.
With SD-WANs’ use of tunneling, running active-active is not an issue. The SD-WAN node will load-balance the connections and maximize their use of available bandwidth. Whether to use one path or another is driven by the same user-configured traffic policies that drive the SD-WAN. Should there be a failure, some SD-WANs can fail over to secondary connections (and back) fast enough to preserve the session, and the customer’s application policies continue to determine access to the secondary line under the additional demand.
Conventional enterprise wide area networks are a hodgepodge of routers, load balancers, firewalls, next generation firewalls (NGFW), anti-virus and more. SD-WANs change all of that with a single consistent policy-based network, making it far easier to configure, deploy, and adapt the WAN. As SD-WANs adapt to evolve and include security functions as well, the agility and usability of SD-WANs will only grow.
I'll make no bones about the fact that I'm a huge fan of Cloud Foundry. It's the right play, by the right people, at the right time. Despite all the attempts to dilute the message over the last eleven years, Platform as a Service (or what was originally called Framework as a Service) is about: write code, write data and consume services. All the other bits, from containers to the management of such, are red herrings. They may be useful subsystems, but they miss the point, which is the necessity for constraint.
Constraint (i.e. the limitation of choice) enables innovation, and the major problem we have with building at speed is almost always duplication or yak shaving. Not only do we repeat common tasks to deploy an application, but most of our code is endlessly rewritten throughout the world. How many times in your coding life have you written a method to add a new user or to extract consumer data? How many times do you think others have done the same thing? How many times are not only functions but entire applications repeated endlessly between corporations or governments? The overwhelming majority of the stuff we write is yak shaving, and I would be honestly surprised if more than 0.1% of what we write is actually unique.
Now whilst Cloud Foundry has been doing an excellent job of getting rid of some of the yak shaving - in the same way that Amazon kicked off the removal of infrastructure yak shaving; for most of us, unboxing servers, racking them and wiring up networks is thankfully an irrelevant thing of the past - there is much more to be done. There are some future steps that I believe Cloud Foundry needs to take, and fortunately the momentum behind it is such that I'm confident of talking about them here without giving a competitor any advantage.
First, it needs to create that competitive market of Cloud Foundry providers. Fortunately this is exactly what it is helping to do. That market must also be focused on differentiation by price and quality of service and not the dreaded differentiation by feature (a surefire way to create a collective prisoner dilemma and sink a project in a utility world). This is all happening and it's glorious.
Second, it needs to increasingly leave the past ideas of infrastructure behind, and by that I mean containers as well. The focus needs to be serverless, i.e. you write code, you write data and you consume services. Everything else needs to be buried as a subsystem. I know analysts run around going "is it using Docker?" but that's because many analysts are halfwits who like to gabble on about stuff that doesn't matter. It's irrelevant. That's not the same as saying Docker is not important; it has huge potential as an invisible subsystem.
Fourth, and most importantly, it needs to tackle yak shaving at the coding level. The simplest way to do this is to provide a CPAN-like repository which can include individual functions as well as entire applications (hint: GitHub probably isn't up to this). One of the biggest lies of object-oriented design was code re-use. This never happened (or rarely did) because no communication mechanism existed to actually share code. CPAN (in the Perl world) helped, imperfectly, to solve that problem. Cloud Foundry needs exactly the same thing. When I'm writing a system, if I need a customer object, then ideally I should just be able to pull the entire object and its related functions from a CPAN-like library because, let's face it, how many times should I really have to write a postcode lookup function?
But shouldn't things like postcode lookup be provided as a service? Yes! And that's the beauty.
By monitoring a CPAN-like library you can quickly discover (simply by examining metadata such as downloads and changes) which functions are commonly being used and have become stable. These are all candidates for standard services to be provided in Cloud Foundry and offered by the CF providers. Your CPAN-like environment is actually a sensing engine for future services, and you can use an ILC-like model to exploit this. The bigger the ecosystem, the more powerful it will become.
I would be shocked if Amazon isn't already using Lambda and the API Gateway to identify future "services", and Cloud Foundry shouldn't hesitate to press any advantage here. This process will also create a virtuous cycle: new things which people develop and share in the CPAN-like library will over time become stable, widespread and provided as services, enabling other people to more quickly develop new things. This concept of sharing code and combining the collaborative effort of the entire ecosystem was a central part of the Zimki play, and it's as relevant today as it was then. By the way, try doing that with containers. Hint: they are way too low level, and your only hope is through constraint such as that provided in the manufacture of unikernels.
There is a battle here because if Cloud Foundry doesn't exploit the ecosystem and AWS plays its normal game then it could run away with the show. The danger of this seems slight at the moment (but it will grow) because of the momentum with Cloud Foundry and because of the people running the show. Get this right and we will live in a world where not only do I have portability between providers but when I come to code my novel idea for my next great something then I'll discover that 99% of the code has already been done by others. I'll mostly need to stitch all the right services and functions together and add a bit extra.
Oh, but that's not possible is it? In 2006, Tom Inssam wrote for me and released live to the web a new style of wiki (with client side preview) in under an hour using Zimki. I wrote an internet mood map and basic trading application in a couple of days. Yes, this is very possible. I know, I experienced it and this isn't 2006, this is 2016!
Cloud Foundry (with a bit of luck) might finally release the world from the endless Yak shaving we have to endure in IT. It might make the lie of object re-use finally come true. The potential of the platform space is vastly more than most suspect and almost everything, and I do mean everything will be rewritten to run on it.
I look forward to the day that most Yaks come pre-shaved. For more read....
Implementation details about the Microservice can be studied in the source code by loading the project into your preferred Java IDE such as Eclipse.
Before the microservice can be run inside Docker, Docker must be installed on your local machine. You can follow the step-by-step Docker installation procedure at: Docker Installation
Once Docker is installed correctly, you can test your installation using the following command:
docker run hello-world
Create a Microservice Docker Image
In the Docker ecosystem, there are two main concepts to understand.
Docker container: a Docker container is a lightweight, isolated instance of a Linux-based OS environment running on top of your host operating system
Docker image: a Docker image packages your application software together with the entire environment it needs; the image is what runs inside a container
For the above microservice, the container loads the microservice image, and as part of this image it contains not only the application code for the microservice but also the Java 8 environment needed to run it.
But before you can load the microservice into Docker, you need to create a Docker image for that software. The steps to create the image are as follows:
Create a build directory next to your microservice project
Copy the microservice artifacts to the build directory
Create a Dockerfile in the build directory whose final instruction starts the service: CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml
From the Docker session, go to the hello-microservice-build directory and issue the command
docker build -t hello-microservice-local .
The Docker build process uses a file named Dockerfile to get its instructions about what to do when building an image. For this particular microservice, the Dockerfile instructs the Docker system to download an image called 'java:8'; this is the core infrastructure needed to run the microservice. Next it adds the microservice jar and configuration to the image. Finally, it exposes ports 9000 and 9001 to service requests.
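Putting those instructions together, a minimal Dockerfile for the hello-microservice-build directory might look like the following sketch (the base image, artifact names and ports are taken from the description above; adjust them to match your own build):
FROM java:8
# add the microservice jar and its configuration to the image
ADD hello-microservice-1.0-SNAPSHOT.jar /
ADD hello-microservice.yaml /
# expose the application and admin ports
EXPOSE 9000
EXPOSE 9001
# start the microservice when the container launches
CMD java -jar hello-microservice-1.0-SNAPSHOT.jar server hello-microservice.yaml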
docker build -t hello-microservice-local . is the command that processes the Dockerfile and produces the hello-microservice-local image.
Note: make sure this command is issued from the Docker session and not just any command-line session.
Once this Java Microservice Docker image is created, it must be run inside a Docker container using the following command:
docker run -p 9000:9000 --name hello-microservice-local -t hello-microservice-local
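If the container starts correctly, the service should be reachable on the mapped port. As a quick sanity check, you might try something like the following from another terminal (the /hello path is only a placeholder; the actual endpoint depends on how the resources are defined in the microservice code):
docker ps
curl http://localhost:9000/hello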
With the recent exploration of cloud computing technologies, organizations are using cloud service models like infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) along with cloud deployment models (public, private and hybrid) to deploy their applications.
There is a concept in the cloud world that is based on application characteristics: the concept of cloud-enabled and cloud-centric applications. In this blog post, Dan Boulia provides a concise explanation about the concept.
You can say that a cloud-enabled application is an application that was moved to cloud, but it was originally developed for deployment in a traditional data center. Some characteristics of the application had to be changed or customized for the cloud. On the other hand, a cloud-centric application (also known as cloud-native and cloud-ready) is an application that was developed with the cloud principles of multi-tenancy, elastic scaling and easy integration and administration in its design.
When developing an application that will be deployed in the cloud, you must keep the cloud principles in mind. They should be taken into account as part of the application. So we come to the first point: Is it better to work within an existing application or to completely redesign it? There is no exact answer because it depends. You have to evaluate the level of effort (labor, time and cost) to transform the application into cloud-enabled versus the effort to completely redesign it to a cloud-centric application.
The second point is: Will my cloud-enabled application work better than a new cloud-centric application? Here I would say no. It’s rare to find an existing traditional application that was developed with any of the cloud principles in mind. It may be possible to construct the same feel (for the user) as a cloud-centric application, but it will not function the same way internally.
Changing an existing application could be easier since you already have the skills and tools in the organization and you won’t need to learn any new technology. However, while it may be easier to change the application, in the long term it will be harder to maintain. New technologies (social media, mobile, sensors) continue to appear and it is becoming more important to integrate them. Doing this will require additional and continuous effort and may exponentially increase development and supporting costs.
Now comes the third point: What can you use to help expedite the move or redevelopment of an existing application to a cloud-centric model? Many cloud companies have development tools that can help an organization on this path. For instance, IBM has recently announced IBM Bluemix, a development platform to create cloud-centric applications. Shamim Hossain explains the capabilities in more detail in his blog post. Another option is to use IBM PureApplication System to expedite the development.
I discussed some points here that I hope provide a better understanding of an important concept in cloud computing and how to address it. Let me know your thoughts! Follow me on Twitter @varga_sergio to talk more about it.
Come to the first Cloud Foundry Meetup in the Waltham area this coming Wednesday, December 11th!
This meetup is your opportunity to learn more about Cloud Foundry and meet people excited about the technology.
On the agenda is an Introduction to Cloud Foundry: the technology and the community by Chris Ferris of IBM.
This will be followed by a talk by Renat Khasanshyn of Altoros on Implementing Cloud Foundry 2.0.
More information at: //bit.ly/1azS5PX
Managing software and product lifecycle integration has always been a challenge, and with the rate of new demands on the enterprise, the challenges are increasing. Leaders from different standards organizations and industry will lead interactive discussions on the importance of open technologies to help enterprises manage the lifecycle activities within their environments. Learn about the direction lifecycle integration is taking as a result of the inclusion of open standards and the importance of this work to you. You will also hear how you can bring forward your requirements and influence the supporting work activities.
The Open Lifecycle Summit will feature short lightning talks and panel discussions with industry leaders such as OASIS CEO Laurent Liscia, Tasktop CEO Mik Kirsten, Opscode VP of Solutions George Moberly, IBM Fellows Michael Kaczmarski and Kevin Stoodley, and IBM VP of Standards and IBM Cloud Labs, Dr. Angel Diaz.
The Summit is free to attend for all those attending IBM Innovate. Join us for an exciting session and refreshments to start your attendance at Innovate 2013. For more information and to RSVP visit http://ibm.co/16jTusU
The challenges of virtualized environments are driving the shift to greater integration of service management capabilities such as image and patch management, high-scale provisioning, monitoring, storage and security. Join us for this webcast to learn how organizations can realize the full benefits of virtualization to reduce management costs, decrease deployment time, increase visibility into performance and maximize utilization.
Even though server proliferation can be partially addressed through virtualization, the usage of virtual and physical assets becomes complex to accurately assess or manage. Cost management is crucial to integrate into overall service management, especially with a move into cloud. This webcast discusses how to implement a financial management roadmap and the key requirements for cloud transparency -- the ability to allocate IT costs, usage, and value.
As a result of feedback from SmartCloud Enterprise customers and business partners, IBM is rolling out new enhancements this week.*
In addition to the availability of IBM SmartCloud Application Services, IBM’s platform-as-a-service offering, new and enhanced capabilities for IBM SmartCloud Enterprise include:
·Platinum M2 VM sizes, now generally available
·Alternate Windows Instance Capture, now generally available
·Windows Import/Copy pre-release, available by request
·Windows 2012 pre-release, available to all users
·Cloud Services Framework enhancements
·APIs for guest messaging, new and available for all users
·ISO 27001 Certification for all IBM SCE data centers
·Object storage with enhanced portal integration with SCE
All the details of each new capability/enhancement can be found on the SCE portal in the “What’s New in SmartCloud Enterprise 2.2” document (SCE account sign-in is required to review the document), but here are a few highlights:
IBM SmartCloud Application Services (SCAS)
IBM’s platform as a service -- IBM SmartCloud Application Services -- runs on top of and deploys virtual resources to IBM SmartCloud Enterprise. SmartCloud Application Services delivers a secure, automated, cloud-based environment that supports the full lifecycle of accelerated application development, deployment and delivery. SCAS provides an enterprise-class infrastructure, enhanced security and pay-per-use, and allows clients to differentiate themselves with built-in flexible options that configure cloud their way – leading to a competitive advantage.
You can find the SmartCloud Application Services offering on the “Service Instance” tab within your SmartCloud Enterprise account.
As a direct result of client requests, we are offering additional flexibility and choice in Windows instance capture. Clients can now use the “Save private image” function with or without the use of Sysprep, the Microsoft System Preparation tool.
We invite you to learn more about all of these enhancements via the documentation library in the SCE portal and welcome your feedback. Thank you for your continued support!
* IBM will roll out these new capabilities in waves beginning mid-December 2012. IBM’s platform as a service offering, IBM SmartCloud Application Services, can be found in the “Service Instance” tab within your SmartCloud Enterprise account.
DevOps has become something of a buzzword lately but the idea behind it can be truly powerful. Using a combination of technology and best practices to increase collaboration between development and operations teams can accelerate the application development lifecycle while improving software quality and reducing costs.
Here’s how IBM is addressing DevOps, with the launch of SmartCloud Continuous Delivery--an agile, scalable and flexible solution for end-to-end lifecycle management that allows organizations to reduce software delivery cycle times and improve quality. Learn more: http://ibm.co/UeAl0B
The challenges of managing virtualized environments are mounting. The benefits of virtualization—from cost and labor savings to increased efficiency—are being threatened by its staggering growth and the resultant complexity. A critical piece to solving these challenges, as many organizations have already discovered, is image management. Read more: http://ibm.co/SpHTlV
Orchestration can be one of those ambiguous concepts in cloud computing, with varying definitions on when cloud capabilities truly advance into the orchestration realm. Frequently it’s defined simply as automation = orchestration.
But automation is just the starting point for cloud. And as organizations move from managing their virtualized environment, they need to aggregate capabilities for a private cloud to work effectively. The automation of storage, network, performance and provisioning are all aspects handled in most cases by various solutions that have been added on over time as needs increase. Even for organizations that take a transformational approach -- jumping to an advanced cloud to optimize their data centers -- the management of heterogeneous environments with disparate systems can be a challenge not simply addressed by automation alone. As the saying goes, “If you automate a mess, you get an automated mess.”
With the proliferation of cloud computing, many businesses are starting to adopt a service provider model -- either as a deliberate strategy to establish new revenue streams or, in some cases, inadvertently to support the growing needs of their organizations. This is especially true for companies with diverse needs, whether they’re tech companies with dev teams churning out new apps and services, or business owners driving requirements for SaaS services and cloud capabilities to enhance their data center operations.
Cloud Computing is a term that is often bandied about the web these days and often attributed to different things that -- on the surface -- don't seem to have that much in common. So just what is Cloud Computing? I've heard it called a service, a platform, and even an operating system. Some even link it to such concepts as grid computing -- which is a way of taking many different computers and linking them together to form one very big computer.
The basic definition of cloud computing is the use of the Internet for the tasks you perform on your computer. The "cloud" represents the Internet.
Cloud Computing is a Service
The simplest thing that a computer does is allow us to store and retrieve information. We can store our family photographs, our favorite songs, or even save movies on it. This is also the most basic service offered by cloud computing.
Flickr is a great example of cloud computing as a service. While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to store those images. In many ways, it is superior to storing the images on your computer.
First, Flickr allows you to easily access your images no matter where you are or what type of device you are using. While you might upload the photos of your vacation to Greece from your home computer, you can easily access them from your laptop while on the road or even from your iPhone while sitting in your local coffee house.
Second, Flickr lets you share the images. There's no need to burn them to a compact disc or save them on a flash drive. You can just send someone your Flickr address.
Third, Flickr provides data security. If you keep your photos on your local computer, what happens if your hard drive crashes? You'd better hope you backed them up to a CD or a flash drive! By uploading the images to Flickr, you are providing yourself with data security by creating a backup on the web. And while it is always best to keep a local copy -- either on your computer, a compact disc or a flash drive -- the truth is that you are far more likely to lose the images you store locally than Flickr is to lose them.
This is also where grid computing comes into play. Beyond just being used as a place to store and share information, cloud computing can be used to manipulate information. For example, instead of using a local database, businesses could rent CPU time on a web-based database.
The downside? It is not all clear skies and violin music. The major drawback to using cloud computing as a service is that it requires an Internet connection. So, while there are many benefits, you'll lose them all if you are cut off from the Web.
Cloud Computing is a Platform
"The web is the operating system of the future." While that's not exactly true -- we'll always need a local operating system -- this popular saying really means that the web is the next great platform.
What is a platform? It is the basic structure on which applications stand. In other words, it is what runs our apps. Windows is a platform. The Mac OS is a platform. But a platform doesn't have to be an operating system. Java is a platform even though it is not an operating system.
Through cloud computing, the web is becoming a platform. With trends such as Office 2.0, we are seeing more and more applications that were once the province of desktop computers being converted into web applications. Word processors like Buzzword and office suites like Google Docs are slowly becoming as functional as their desktop counterparts and could easily replace software such as Microsoft Office in many homes or small businesses.
But cloud computing transcends Office 2.0 to deliver applications of all shapes and sizes, from web mashups to Facebook applications to web-based massively multiplayer online role-playing games.
With new technologies that help web applications store some information locally -- which allows an online word processor to be used offline as well -- and a new browser called Chrome to push the envelope, Google is a major player in turning cloud computing into a platform.
Cloud Computing and Interoperability
A major barrier to cloud computing is the interoperability of applications. While it is possible to insert an Adobe Acrobat file into a Microsoft Word document, things get a little bit stickier when we talk about web-based applications.
This is where some of the most attractive elements of cloud computing -- storing the information on the web and allowing the web to do most of the 'computing' -- become a barrier to getting things done. While we might one day be able to insert our Google Docs word processor document into our Google Docs spreadsheet, things are a little stickier when it comes to inserting a Buzzword document into our Google Docs spreadsheet.
Forgetting for a moment that Google probably doesn't want you to have the ability to insert a competitor's document into their spreadsheet, this creates a ton of data security issues. So not only would we need a standard for web 'documents' to become web 'objects' capable of being generically inserted into any other web document, we'll also need a system to maintain a certain level of security when it comes to this type of data. Possible? Certainly, but it isn't anything that will happen overnight.
What is Cloud Computing?
This brings us back to the initial question. What is cloud computing? It is the process of taking the services and tasks performed by our computers and bringing them to the web.
What does this mean to us?
With the "cloud" doing most of the work, this frees us up to access the
"cloud" however we choose. It could be a super-charged desktop PC
designed for high-end gaming, or a "thin client" laptop running the
Linux operating system with an 8 gig flash drive instead of a
conventional hard drive, or even an iPhone or a Blackberry.
We can also get at the same information and perform the same tasks whether we are at work, at home, or even at a friend's house. Not that you would want to take a break between rounds of Texas Hold'em to do some work for the office -- but the prospect of being able to do it is pretty cool.
Organizations looking to optimize across the application lifecycle recognize the need for enhanced innovation and speed to market. Yet most IT resources are focused on covering the basics, leaving fewer resources to support business agility. The solution: Platform as a Service (PaaS).
IBM’s PaaS solution, IBM SmartCloud Application Services, or SCAS, allows clients to differentiate themselves with built-in flexible services that let them build and customize cloud solutions their way – leading to a competitive advantage. Companies are using enterprise-class IBM Application Services to measure and respond to market demands, capture new markets, and reduce application delivery and management costs.
What are the benefits of a PaaS solution?
First, with IBM Collaborative Lifecycle Management Service, included within SCAS, development teams can establish shared team development environments in minutes – before it used to take weeks. Within hours they can quickly define their development team and begin working collaboratively to respond to business needs.
Another significant benefit of a PaaS approach is the time it takes to get an application deployed and to market. Application deployment can take weeks on a traditional environment but with IBM SmartCloud Application Services, applications can be deployed to the cloud in minutes.
SCAS also allows clients to respond rapidly to changing market conditions by deploying or modifying cloud-centric (“born on the cloud”) or cloud-enabled (legacy applications) quickly and easily. In fact, developers can move from the dev/test environment directly into production with SCAS, taking advantage of proven repeatable patterns contained within the SmartCloud Application Workload Service, thus eliminating human error. These repeatable patterns allow clients to eradicate errors by avoiding manual processes – this drives consistent results, increases productivity, and reduces risk.
IBM SmartCloud Application Services are compatible with the newly announced IBM PureSystems family. For example, through SmartCloud Application Services clients can rapidly design, develop, and test their dynamic applications on IBM's public cloud and deploy those same application patterns on a private cloud built with PureApplication Systems, or vice versa.
Want to try IBM’s PaaS . . . for free*? IBM SmartCloud Application Services is now in pilot and accepting new clients who want to get ready to accelerate their cloud initiatives. Clients won’t pay for SCAS services during the pilot, but will only be charged for the underlying *SmartCloud Enterprise infrastructure used by the services (that’s because SCAS runs on top of IBM’s Infrastructure as a Service offering, SmartCloud Enterprise, or SCE). Existing SCE customers can get up and running on the pilot quickly and start realizing the benefits of PaaS right away.
To be considered for the program, new or existing SCE customers should visit the IBM SmartCloud Application Services web site and click the button on the right titled, “Get a jump on the competition with the SmartCloud Application Services pilot program.”
Who is using IBM SmartCloud Application Services? CLD Partners, a leading provider of IT consulting services with a particular focus on cloud computing, began using SCAS during the beta which launched in 2011 and has now transitioned into the pilot program.
“We share IBM’s vision for how enterprise customers can achieve huge productivity gains by embracing cloud technologies. SCAS allowed us to utilize world class software in a managed environment that greatly reduced the complexity of the deployment while also providing for future scalability that our customers only pay for when they need it,” said Steve Clune, Founder and CEO of CLD Partners. “Ultimately, traditional infrastructure planning and configuration that would have required weeks was literally reduced to hours. And future flexibility as infrastructure needs change is virtually limitless.”
Who would be interested in the SmartCloud Application Services pilot program? IT Operations, Independent Software Vendors (ISVs), Line of Business, and Application Developers would benefit from the SCAS pilot program. And it doesn’t matter the company size, enterprise or mid-market; all types of businesses can realize value from getting their applications to market faster.
One of the exciting and valuable characteristics of IBM SmartCloud Enterprise is its tight linkage with the IBM Software Group portfolio of offerings. In addition to the offerings from IBM Software Group, innovative software vendors are making exciting offerings available as well. There is an ever-growing list of offerings available to IBM SmartCloud Enterprise customers. These recent additions are now in the SmartCloud Enterprise public catalog and available for you to use.
BYOL - Bring Your Own License; PAYG - Pay As You Go
IBM Business Process Manager is a comprehensive BPM platform giving you visibility and insight to manage business processes. It scales smoothly and easily from an initial project to a full enterprise-wide program. IBM Business Process Manager harnesses complexity in a simple environment to break down silos and better meet customer needs.
The following BPM images are now available in the catalog:
IBM Process Center Advanced 7.5.1 64b - BYOL
IBM Process Center Standard 7.5.1 64b - BYOL
IBM Integration Designer 7.5.1 64b - BYOL
IBM Process Server Advanced 7.5.1 64b - BYOL
IBM Process Server Standard 7.5.1 64b - BYOL
IBM Process Designer 7.5.1 64b - BYOL, PAYG
IBM BPM Express 7.5.1 64b - BYOL, PAYG
IBM WebSphere Service Registry and Repository (WSRR) is a system for storing, accessing and managing information, commonly referred to as service metadata, used in the selection, invocation, management, governance and reuse of services in a successful Service Oriented Architecture (SOA). In other words, it is where you store information about services in your systems, or in other organizations' systems, that you already use, plan to use, or want to be aware of.
The following WSRR images are now available in the catalog:
IBM WebSphere Service Registry 64bit BYOL IBM Image
IBM WebSphere Service Registry 184.108.40.206 64bit BYOL
IBM WebSphere Message Broker (WMB) delivers an advanced Enterprise Service Bus (ESB) that provides connectivity and universal data transformation for both standard and non-standards-based applications and services to power your SOA.
The following WMB images are now available in the catalog:
IBM WebSphere Message Broker 220.127.116.11 64b BYOL
IBM SPSS Decision Management enables business users to automatically deliver high-volume, optimized decisions at the point of impact to achieve superior results.
The following SPSS image is now available in the catalog
IBM SPSS Decision Management 6.2 64b BYOL
From our partner Riverbed comes Riverbed® Stingray™, a software-based application delivery controller (ADC) designed to deliver faster and more reliable access to public web sites and private applications.
The following Riverbed Stingray images are now available in the catalog:
Riverbed Stingray V 8.0 RHEL 6 32 bit BYOL
Riverbed Stingray V 8.0 RHEL 6 64 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 32 bit BYOL
Riverbed Stingray V 8.0 SLES 11 SP1 64 bit BYOL
Additionally, Alphinat SmartGuide provides visual, drag and drop tools that can help you quickly build interactive web dialogues that guide people to the relevant response, help them diagnose problems or lead them through a series of well-defined steps that make it easy to complete complex—or infrequently performed—tasks.
The following Alphinat SmartGuide images are now available in the catalog:
GridRobotics' Cloud Lab Grid Automation Server can manage any number of client or agent computers, which can be spun up automatically on public clouds like IBM SCE or private clouds. Grid Robotics’ Cloud Lab Classroom is a virtual classroom management solution.
The following GridRobotics Cloud Lab images are now available in the catalog:
GridRobotics Cloud Lab Grid Automation Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Classroom Base Server 1.4 32b R2 - BYOL
GridRobotics Cloud Lab Base Agent V 1.4 32b R2 - BYOL
Cloud computing tests the limits of security operations and infrastructure from various perspectives. Let us examine what is different about cloud security and identify the existing threats as well as the new areas we should be concerned about.
Figure 2 Cloud Security - Existing & New Threats
I think what makes cloud security complex is the number of layers involved in the cloud service stack and the number of components in each layer. This means:
·Increased infrastructure layers to manage and protect
·Multiple operating systems and applications per server
More Components = More Exposure
As we can see, we already do perimeter protection at the network and operating system levels, as well as physical and personnel security, for traditional infrastructure. All of these hold good for cloud as well to combat the existing threats at these layers.
Let us examine the new points of exposure with cloud. Security and resiliency complexities are raised by virtualization and automation, which are essential to cloud. The new risks include:
·Cloud Service Management vulnerabilities
·Secure storage of VMs and the VM images
·Managing identities on the increasing number of virtual assets
·Stealth rootkits in hardware now possible
·Virtual NICs & virtual hardware
·Virtual sprawl, VM stealing
·Dynamic relocation of VMs
·Elimination of physical boundaries
·Manually tracking software and configurations of VMs
To manage these additional complexities, you need a reference model that is comprehensive and covers security controls that can combat not only the existing challenges but also the new challenges that cloud brings in. The Foundational Security Controls for the IBM cloud reference model (see below) provide the different elements and controls required to build a secure cloud.
Figure 1 Foundation Security Controls for IBM Cloud
Managing datacenter identities (Identity and Access Management) is one of the top-most security concerns, and we discussed how to handle it in my previous post. I’ll discuss how to handle the virtualization-related threats in my next post. Meanwhile, let me know your comments on this reference model.
Do you think this set of controls is comprehensive? Do you see any areas not covered from a cloud security perspective? If so, just add it as a comment to this post and let us discuss.
Join us for the 2012 IBM SmartCloud Symposium on 16-19 April 2012 in San Francisco, California. This Symposium will help you Rethink IT and Reinvent Business.
The event will introduce Cloud Computing’s disruptive potential to not only reduce cost and complexity but also reinvent the way we do business. Over the course of four days, there will be sessions that define cloud computing and discuss transformative benefits and challenges to consider while sharing specific, proven patterns of success. We will provide proven methods to get started on the cloud journey, from the up-front investments to capacity planning. This event will cover the technology behind private and public clouds, whether you choose to build your own, leverage prepackaged solutions or have it delivered as a service. We will explore challenges and solutions for security, virtualization and performance of mission critical applications, as well as automating service delivery processes for cloud environments. We will help you: design, deploy and consume.
In my earlier post on the challenges for cloud, I discussed security as the top concern. I also detailed the top concerns with regard to securing the cloud in the subsequent post.
Cloud computing tests the limits of security operations and infrastructure for the various security and privacy domains. Cloud brings in a lot of additional considerations like multi-tenancy, data separation, virtualization, etc. In a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases - greatly affecting all aspects of IT security. We will discuss the different security aspects, classifying them against specific adoption patterns (see post here).
The cloud enabled data center pattern is the more predominant one, with infrastructure and identity management as the top concerns. Within cloud security, getting the design of infrastructure security right is the important aspect - the details of which, and how it is done by different public clouds, we discussed in the previous post. Now, with regard to identity, let's discuss the top requirements and use cases, and look at what solutions we can provide to make the cloud secure. Let's start with managing datacenter identities, which is the top concern.
Managing Datacenter Identities
Identity and Access Control needs to deliver capability that can be used to provide role-based access to securely connect users to the cloud. The users include the cloud service provider roles as well as consumer roles. Within each user group we need to support User as well as Administrator roles. The identity and access management should cover the 4 As - Authentication, Authorization, Auditing and Assurance.
§For a cloud consumer user, it is about making sure the user identity is verified and authenticated at the self-service portal and providing the right access to the resource pools.
§For the administrator, we need to provide role-based access to Service Lifecycle Management functions.
§We will need to integrate with existing user directory infrastructure (AD/LDAP/NIS) to extend the user identity to the cloud environment as well.
§Once in the cloud environment, we need to automatically manage access to the cloud resources, through provisioning and de-provisioning of resource profiles and users against the resources in the cloud identity and access management systems. Manual processes to manage accounts for users on various virtual systems and applications are not going to scale in a cloud environment. The same is true of the manual processes to process various audit logs to meet compliance and audit requirements.
§Massively parallel cloud-computing infrastructures involve enormous pools of external users as well. We need to ensure a smooth user experience so that users don’t need to enter their credentials multiple times to access various applications hosted within the enterprise or by business partners and cloud providers.
§Management of user identities and access rights across hosted, private and hybrid clouds for internal Enterprise users is also a major challenge that includes:
o Centralized user access management to on- and off-premise applications
o Federated Single Sign-on and Identity Mediation across different service providers
Let's look at some of the capabilities that we can leverage to address these requirements.
IBM Security Identity and Access Assurance provides the following capabilities, which enable clients to reduce costs, improve user productivity, strengthen access control, and support compliance initiatives:
·Automated and policy-based user management that helps effectively manage user accounts
·Enterprise, Web, and federated single sign-on, inside, outside, and between organizations, including cloud deployments
·Identity and access support for files, operating platforms, Web, social networks, and cloud-based applications
·Stronger forms of authentication (smart cards, tokens, one-time passwords, and so on)
·Monitoring, investigating, and reporting on user activity across the enterprise
Tivoli Identity Manager complements its role management capabilities with role mining and lifecycle management, provided by the IBM Security Role and Policy Modeler component, which helps reduce the time and effort to design an enterprise role and access structure, and automates the process of validating the access information and role structure with the business.
IBM Security Access Manager for Enterprise Single Sign-On offers wide platform coverage, strong authentication enhancements, and simpler deployments. It introduces 64-bit operating system and application support, a virtual appliance for easier installation and configuration of the server, expanded support for smart cards, and simplified profiling.
Tivoli Federated Identity Manager offers additional Open Authorization (OAuth) authorization standards support (for business-to-consumer deployments and utilization of cloud-based applications and identities), enhanced security with Secure Hash Algorithm (SHA-2) support, usability enhancements, and new Business Gateway capabilities.
As we discussed in my previous post, transparency or more control is the need of the hour with regard to security on the cloud. Let's examine how this is done by the popular cloud providers and understand the methods and the technologies. We need to secure the infrastructure, network, endpoints, applications, processes, data, and information, and overall have governance in place to mitigate the risk and meet compliance. Let us take the infrastructure to begin with.
The key areas for a security team to design for, with regard to infrastructure security, include logs on all resources – VMs and hypervisors.
Let us start by looking at the public cloud implementations to understand how they are managing these aspects.
Almost all the vendors – IBM, Amazon and others – provide a means to SSH with keys to the guest OS. The connection is encrypted and is authenticated with a certificate and private key which can be generated by the customer.
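As a simple illustration of that key-based access model (a sketch only; the exact user name and the way the public key gets registered with the instance vary by provider), a session from the customer side might look like:
ssh-keygen -t rsa -b 4096 -f mycloud_key    # generate the key pair locally; upload or register the .pub half with the provider
ssh -i mycloud_key clouduser@<instance-public-ip>    # log in to the guest OS using the private key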
IBM SmartCloud is designed with enterprise security as a top priority. Access to the infrastructure self-service portal and application programming interface (API) is restricted to users with an IBM Web Identity. The infrastructure complies with IBM security policies, including regular security scans and controlled administrative actions and operations. Within our delivery centres, customer data and virtual machines are kept in the data centre where provisioned, and the physical security is the same as that for IBM’s own internal data centres. With the virtual private network (VPN) option, customers can isolate their servers in the IBM SmartCloud on a virtual local area network (VLAN) that can act as an extension of their internal network. This VPN capability can also be used to create security zones in an Internet-facing configuration to better protect their servers against attacks.
Roles across LotusLive and their access authorizations are recorded in a Separation of Duty matrix. Other controls include:
·A security-rich infrastructure: security configuration reviews and periodic vulnerability scanning of all systems and infrastructure
·Enforcement points providing application security: multi-layered
·Compliance with periodic programs that address all elements of the service
We will see how the infrastructure security aspects are dealt with for private clouds in my next post. Stay tuned and keep those comments coming. Some of my readers have told me that the blog entries are not showing up well in Internet Explorer. While I will make the effort to fix the issue, please use Firefox or another browser in the meantime.
And if you find these posts interesting, don't forget to rate the post (click on the stars), and if you have an extra minute, do put in a comment on what aspects you find interesting or need discussion.
IT security is a well-researched and mature area. The reason we have enterprises doing commerce over the web today is because IT security practices, tools and technologies have matured to establish the trust and overcome the concerns. As with most new technology paradigms, security concerns surrounding cloud computing have become the most widely talked about inhibitor of widespread usage, as discussed in my previous post.
To gain the trust of organizations, cloud services must deliver security and privacy expectations that meet or exceed what is available in traditional IT environments. Let us discuss the top security concerns when it comes to cloud.
Transparency or Less Control
If we look at the security and privacy domains in cloud, they are no different from the traditional domains. We need to secure the infrastructure, network, endpoints, applications, processes, data, and information, and overall have governance in place to mitigate the risk and meet compliance. But in a cloud environment, access expands, responsibilities change, control shifts, and the speed of provisioning resources and applications increases - greatly affecting all these aspects of IT security. The different cloud deployment models - public, private and hybrid clouds - also change the way we need to think about security. The responsibilities are spread across consumers, service resellers and providers. The immediate risk of this shared responsibility is that nobody gets a holistic view of the security, and so there is less customization of any security controls. Consumers need visibility into day-to-day operations as well as access to logs and policies. This lack of visibility or transparency is usually the top concern shared universally.
Data and Information Security
The next primary concern that customers mention related to security on the cloud is data and information security. The specific concerns include:
§Protection of intellectual property and data
§Ability to enforce regulatory or contractual obligations
§Unauthorized use of data
§Confidentiality of data
§Availability of data
§Integrity of data
A shared, multi-tenant infrastructure increases the potential for unauthorized exposure, especially in the case of public-facing clouds. Security administrators need to worry about designing security for applications and data that are publicly exposed and can potentially be accessed by anybody on the internet.
Different industries and geographies have different regulations and rules that they need to comply with, depending on the workloads and data they put on the cloud. Complying with SOX, HIPAA and other regulations is one risk or issue because of which customers are not ready to put their applications on the cloud. Cloud or no cloud, for these sorts of workloads comprehensive auditing capabilities are essential.
Security Management - Methods and Tools
Finally, customers need to know how today's enterprise security controls are represented in the cloud. They need to understand how security events are monitored and correlated, and how actions are taken when needed, to keep their infrastructure, workload and data safe. Security getting in the way of high availability is another key concern. IT departments worry about a loss of service should outages occur because of security reasons. If so, when running mission critical applications, how soon you can get the environment back at the same level of security is the priority.
Until all of these concerns are addressed, and without strong availability guarantees, customers may not be ready to run their apps in the cloud. But things are not as bad as we might think. We will discuss how these aspects can be addressed, and what tools and technologies to put to use, in the upcoming posts.
Cloud Security – The top most concern and Opportunity
First of all, wishing all my readers a very happy and prosperous year 2012 ahead.
A few things happened towards the end of the year that were significant to me. IBM acquired Q1 Labs to drive greater security intelligence and created a new Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I am happy to be in this new role where I can provide solutions to customers to handle their cloud security concerns and make it easy for them to adopt cloud and innovate at a faster rate than before.
In my previous post, we discussed security as the top concern why customers and enterprises are not adopting cloud. As part of this year's posts, I plan to discuss the various security issues and aspects of cloud computing. We will explore the unique challenges with cloud security and discuss which aspects are important for each customer adoption pattern that we have seen.
We will also learn how the IBM Security Framework can be used to address the various security challenges. I look forward to your comments and inputs in this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the world's most comprehensive security portfolio – IBM Security Systems. I'll try and elaborate the IBM point of view on cloud security and discuss the architectural model to address the security requirements for cloud. Stay tuned and keep those comments and inputs coming.
With the barrage of cloud news constantly hitting the market, it can be challenging for organizations to differentiate between all of the solutions and capabilities out there.
But with the latest cloud offering from IBM, the value proposition is quite simple—you get a low-cost, low-risk entry to cloud computing with compelling features. This is especially important for organizations who are still trying to leverage the cost savings of virtualization.
Our customers have told us they’re looking to cloud computing to increase agility—the ability of IT to evolve and meet business needs—and they’re looking for ways to control expenses related to IT investments. They also want to reduce IT complexity while at the same time increasing utilization, reliability and scalability of IT resources. And they are looking for the ability to expand capabilities gradually, as their needs change and grow.
In designing a solution to meet all of these needs, we developed IBM SmartCloud Provisioning. Using industry best practices for cloud deployment and management, this new solution allows organizations to quickly deploy cloud resources with automated provisioning, parallel scalability and integrated fault tolerance to increase operational efficiency and respond to user needs.
The name doesn’t tell the whole story though. IBM SmartCloud Provisioning is a full-featured solution wrapped up in an easy-to-implement package. That means you get:
·Rapidly scalable deployment designed to meet business growth
·Reliable, non-stop cloud capable of automatically tolerating and recovering from software and hardware failures
·Reduced complexity through ease of use and improved time to value
·Reduced IT labor resources with self-service requesting and highly automated operations
·Control over image sprawl and reduced business risk through rich analytics, image versioning and federated image library features
Using this technology, we’ve seen customers get a cloud up and running in just hours—realizing immediate time to value. It’s fast—administrators have been able to go from bare metal to ready-for-work in under five minutes, or start a single VM and load an OS in under 10 seconds, or scale up to 50,000 VMs in an hour (50 nodes).
But ultimately, these IT benefits have translated to business benefits—customers have been able to see how cloud computing can impact their business, and how they can accelerate the delivery of new services to drive revenue.
With the new release of IBM SmartCloud Provisioning this week, you can try it and see firsthand the potential of this breakthrough technology to accelerate your journey to cloud.
And if you want a preview of what’s in development, you can join our Open Beta program for access to beta-level code.
While I’m writing this blog, the Ministers of Tamil Nadu and Kerala are having a meeting with the Prime Minister to discuss the contentious issue of Mullaperiyar at length. For those who don’t know about this issue, it is about the Mullaperiyar Dam in Kerala. Mullaperiyar Dam is a masonry gravity dam over River Periyar and is operated by the Government of Tamil Nadu based on a 999-year lease agreement. The catchment areas and river basin of River Periyar downstream include five districts of Central Kerala, namely Idukki, Kottayam, Ernakulam, Alappuzha and Trissur, with a total population of around
This dam is at centre stage again in the wake of reports that the dam is weakening due to an increase in incidents of tremors in Idukki district in Kerala. Ministers from Kerala are seeking Central Government intervention in ensuring the safety of the dam. At the same time, Tamil Nadu is insisting on increasing the water level in the reservoir for enhancing water supply to the state. While Tamil Nadu wants to increase the water level in the reservoir, Kerala has been insisting that it be reduced from the current 136 feet to 120 feet.
Currently I don't think we have clear metrics on the exact usage of water by each state, what the right level of water to be retained by the dam is, what the risks are, etc. We have been relying on data that we have from the past.
However you look at it -- whether too much or not enough -- the world needs a smarter way to think about water. We need to look at the subject holistically with all the other considerations as well. We use water for more than drinking. We need to make an inventory of how much water we get and how it is used – for industries, irrigation, etc.
This is where I think we need smarter ways to manage water, in a way that addresses the needs of both states. Smarter Water Management can help us think in a smarter way about water. For instance, IBM is helping the Beacon Institute build a source-to-sea real-time monitoring network for New York’s Hudson and St. Lawrence Rivers that can also report on conditions and threats in real time. There are many other case studies across the globe on IBM Smarter Water Management.
Those interested in the problem and the possible solutions should definitely read IBM’s broader outlook on Water Management as covered in the Global Innovation Outlook. Rivers for Tomorrow is another interesting partnership between IBM and The Nature Conservancy. IBM is providing a state-of-the-art support system for a free, online application that will provide easy access to data and computer models to help watershed managers assess how land use affects water quality.
Though it's a worldwide entity, water is treated as a regional issue. I think we should try putting technology to use to solve our water problems. The solution should be a more instrumented, interconnected and intelligent system that can not only take into consideration the real-time monitoring of the river but also include early warning systems to notify of risks related to earthquakes, etc. IBM’s Strategic Water Management Solutions include offerings to help governments, water utilities, and companies monitor and manage water more effectively. The IBM Strategic Water Information Management (SWIM) solutions platform is both an information architecture and an intelligent infrastructure that enables continuous automated sensing, monitoring, and decision support for water management.
Now you might be wondering what this has to do with cloud and why this post is on Cloud Computing Central. For these solutions and platforms to be successful, it is highly important that we have energy-efficient, high-performance computing platforms and complex sensor, metering, and actuator networks. Such platform needs, and the flexibility of having the solution on-premise as well as leveraging different delivery models, can only be supported through a cloud.
I think we should just leverage these solutions on the cloud to
solve this issue and keep all the states and its people happy :-).
In my previous post, we looked at understanding the different adoption patterns – i.e. how customers are turning towards cloud. Some of the key reasons for the “why” are listed below:
Ease of deployment
More flexibility in supporting evolving business needs (both from a technical and a business perspective)
Lower cost of ownership
Easier way to scale and ensure availability and performance
Overall ease of use
While all of these are good, there are still many yet to get on to this cloud computing train. Let's explore the key concerns or challenges that make them reluctant to jump in. The following are inputs that I've gathered from various analyst studies and resources on the internet.
Security and Privacy - The top concern that everybody seems to agree is a challenge with cloud is security. Data security and privacy concerns rank top in almost all of the surveys. Cloud computing introduces another level of risk because essential services are often outsourced to a third party, making it harder to maintain data integrity and privacy, support data and service availability, and demonstrate compliance.
Real Benefits / Business Outcome – Though we have several case studies showcasing the benefits arising out of implementing cloud technologies, some customers are still not convinced of the possible benefits. Their main concern is how to realize the investment to its full potential and make cloud part of their mainstream IT portfolio. Enterprises need a good view into the real benefits of cloud computing rather than just seeing its potential to add value. The return on investment (ROI) on cloud needs to be substantiated by comparing specific metrics of traditional IT with cloud computing solutions that can show savings demonstrating cost, time, quality, compliance, revenue and profitability improvement. The cloud ROI model should include things such as indicators for comparing availability and performance versus recovery SLAs, workload-wise assessments, and Capex versus Opex cost benefits.
Service Quality: Service quality is one of the biggest factors that enterprises cite as a reason for not moving their business applications to cloud. They feel that the SLAs provided by the cloud providers today are not sufficient to guarantee the requirements for running production applications on cloud, especially related to availability, performance and scalability. In most cases, enterprises get refunded for the amount of time the service was down, but most current SLAs don't cover business loss. Without a proper service quality guarantee, enterprises are not going to host their business critical infrastructure in the cloud.
Performance / Insufficient responsiveness over the network: Delivery of complex services through the network is clearly impossible if the network bandwidth is not adequate. Many businesses are waiting for improved bandwidth and lower costs before they consider moving into the cloud. Many cloud applications are still too bandwidth intensive.
Integration: Many applications have complex integration needs to connect to other cloud applications as well as other on-premise applications. These include integrating existing cloud applications with existing enterprise applications and data structures. There is a need to connect the cloud application with the rest of the enterprise in a simple, quick and cost-effective way.
I plan to discuss more on the perceived and real threats related to security and privacy in my subsequent posts. In my new role as an architect for IBM Security Solutions, I'd like to discuss the details of what IBM tools and technologies you could use to overcome these issues. Meanwhile, keep those comments coming; I look forward to them, to understand what other areas you think are key concerns to be addressed to accelerate the adoption of cloud.
The IBM Tech Trends report is out! We asked, you answered. Check out the results of IBM developerWorks' 2011 Tech Trends survey and find out what more than 4,000 IT professionals -- your peers -- have to say about the future of technology, including their opinions on cloud computing, business analytics, mobile computing, and social business.
The report provides insight from the worldwide IT development community into the adoption, preferences and challenges of key enterprise technology trends including cloud, business analytics, mobile computing, and social business. The results also provide guidance on areas where IT professionals like you say they need help with skills to develop new technologies and platforms that will be in demand in the coming years.
As we focus in on cloud, there is absolutely a growing trend in cloud computing to view it as more than just cheap infrastructure. Companies are now exploring the possibility of developing applications in the cloud (you guys are already doing that), many of them related to mobile development.
Currently the biggest challenge is integrating the cloud into application development, as the reduction of operating expenses is the driver of this move. We still have a way to go, however, with 40% of the survey responders saying their company is not yet involved in cloud. Hmm, interesting, right?
The cool news is that 75% of those same responders expect this to change over the next two years, with theirs and other enterprises taking to building cloud infrastructure.
I discussed The Next Big thing – Cloud enabled business model Innovation in my previous post. But you may be asking where do I start. That's where, I guess, the Cloud Adoption Patterns work that IBM has pioneered is going to help. This is some great analysis - Cloud Adoption Patterns - that IBM has done based on thousands of cloud engagements delivered so far. This analysis is a good abstraction of the ways organizations are consuming cloud -- a good starting/entry point for discussions on cloud.
The four most common entry points to cloud solutions are discussed in the picture above. I love these videos on YouTube - Cloud Adoption Patterns - that tell you the essence of these patterns in less than 2 minutes.
Cloud Enabled Data Center – to achieve better return on investment and manage complexity by extending virtualization well beyond just hardware consolidation.
Solutions on Cloud – to access enterprise-level capabilities through a provider's applications running on a cloud infrastructure; to improve innovation and flexibility while minimizing risk and capital expense.
Cloud Service Provider – to innovate with new business models by building, extending, enabling and marketing cloud services.
For each of these patterns of cloud adoption, we have defined a set of proven projects that IBM supports with software, services and solutions to help businesses streamline the implementation of their chosen cloud capabilities.
The Cloud Enabled Data Center pattern is the case for most private cloud implementations. Most customers start with providing infrastructure as a service on the cloud. This pattern also discusses how we can share infrastructure across multiple projects and drive benefits. It also covers the automation of operational and business processes that makes it possible to have a responsive IT department that can help the business be agile.
The next level of gain or reuse would be to run your workloads on a shared stack of middleware. The Platform as a Service pattern is an integrated stack of middleware that is optimized to execute and manage different workloads, for example batch, business process management and analytics. This middleware stack standardizes and automates a common set of topologies and workloads, providing businesses with elasticity, efficiency and automated workload management. A cloud platform dynamically adjusts workload and infrastructure characteristics to meet business priorities and service level agreements. When all the layers below understand what workloads are running on top of them and optimize themselves, these workloads run more efficiently and at a lower cost. The Cloud Platform Services adoption pattern can improve developer productivity by eliminating the need to work at the image level so that developers can instead concentrate on application development.
The Solutions on Cloud pattern maps to the SaaS model, where you leverage the cloud to innovate with speed and efficiency to drive sales and profitability. Here we look at creating and consuming business solutions on the cloud. Some of the key offerings in this space are business process design, social and collaboration tools, supply chain and inventory, digital marketing optimization, B2B integration services, etc. These generic services consumed from the cloud relieve you of the pain of setting things up from scratch as well as enable you to scale based on your demands.
Cloud Service Provider (CSP) Pattern is the one that most of the Telcos
adopt when they have to service multiple consumers with a single cloud
solution. We provide tools and technologies to design and deploy highly secure,
multi-tenant cloud services infrastructure that can integrate nicely with
plenty of 3rd party applications.
As we understand, it is easier to implement the IaaS pattern, and there is more work to do when we implement the SaaS or CSP patterns. But the gain is greater when we do the sharing at the software or application level. Depending on where you are in your current IT environment, you can pick and implement any of these patterns that suit you. The work that we have done to analyse these patterns and provide a consistent set of technologies and tools to build them out should make life easy for you. Leverage it – less pain and more to gain.
There's still time to sign up for the IBM webcast: Managing the Cloud – Best practices for cloud service management
Organizations today are looking to cloud computing to deliver cost savings and faster service delivery. However, most organizations are still struggling to have the basic IT infrastructure that is necessary to take the leap to a robust cloud. This session will explain how service management can help provide the essentials to maintain service levels in the cloud and best practices based on IBM's work with customers. This information will provide the foundation for building and managing a cloud to meet your business objectives and transform IT.
The Next Big thing – Cloud enabled business model Innovation
I remember the day when one of our Executives - Nick Donofrio - visited us in India. He is like the chief mentor for all the members of the IBM technical community, and he has seen IBM and the IT industry for many years. He was addressing a Technical Exchange event a few years ago when someone in the audience asked him this question – "Sir, you have seen technology for so many years now – can you tell us what's going to be the next big thing in terms of invention/innovation?" Everyone was all ears waiting for the answer - is it the next version of the internet, the search, a web 2.0 application or maybe an intelligent mobile app? But his answer was that he believes there is not going to be any next big thing in technology. The next big thing for all of us is going to be business model innovation. Even today his statement holds very true. Businesses that are able to reinvent their business model are succeeding and managing to stay on top, while others vanish from the scene.
There are lots of innovative and technical things happening all around us, like:
Mobility – doing more and more using mobile devices
Social Media – thinning the line between work and life, and business having reach to your social network
Data and its related analytics – giving the business insights that were not possible a few years ago.
I believe the next big thing is going to be how well you can use all these elements for business, seamlessly and cost effectively. The key to succeeding is to use technology to do this business model innovation and do it fast. How do you do it faster? The answer is cloud. This is something that I'm saying based on the data that IBM has gathered analyzing over 2,000 customers' cloud adoption patterns. All of them have seen the below advantages with cloud.
Considering all these factors, I think the next big thing is Cloud Enabled Business Model Innovation. I was able to relate easily to some of the latest announcements that we have made in the cloud because they are just restating this same belief. As discussed in this interesting video by IBM's Saul Berman (Innovation & Growth Leader), 60% of the customers that IBM interviewed say they would consider cloud immediately, and 70% of them intend to use cloud to enable business model innovation. Based on the rate at which they adopt new technologies, they may be an Optimizer (looking at improving the existing model), an Innovator (looking at a new model) or a Disruptor (ready to bring in game-changing ideas).
So as today's IT leaders, let us broaden our focus from merely delivering technology to solving larger business issues. One great opportunity for that is to tune in or be present for the SWG Universe India 2011. You will get a chance to listen to some great speakers who will talk about how to use cloud for business model innovation.
Cloud enabled Business Model Innovation, I feel, is the next big thing that could change IT and businesses. So come, let's Rethink IT & Reinvent Business.
In order for me to be responsive to your reading interests and learning needs, I thought I'd gather some quick feedback to help me understand your reactions to my blog. I request your response by taking this short survey. It should not take more than two and a half minutes of your time. It is primarily for me to improve the focus of my blog. Please note that there is nothing official about this survey and all responses are anonymous within the system.
You can see all the blog entries in this category by clicking on the tag "stepbystep". If you liked any entry in the blog, please rate it by clicking on the "star", or feel free to provide your comments and inputs through this feedback form. You can access the feedback form here. I look forward to your comments and inputs.
I've been writing about the step-by-step approach to cloud till now. Given the rate at which I see cloud computing being adopted inside and outside the Enterprise, I think we really need to get out of our step-by-step approach and start riding the wave. IBM has implemented maybe over 2,000 cloud engagements in the last year and is managing over 1 million virtual machines today. We have identified the customer cloud adoption patterns and entry points to cloud and have lots of lessons learnt and experience to share. So won't it be nice if we could talk to you about these things as well as share the best practices with you? All of that is difficult to discuss through a blog. So you have a better option – The IBM Software Universe 2011 – The Next Big Wave.
Yes, the 7th edition of IBM India's largest annual software conclave is happening this year on Oct 19th and Oct 20th. I believe it would be time well spent to learn from our experiences and accelerate your adoption of cloud. We have some interesting sessions on Private Cloud [R]Evolution, which will discuss some of the key trends and technologies to look at for building the cloud inside your firewall. If you are looking to understand how to expand your existing Data Center capabilities to have better visibility, control and automation across your physical and virtual environments, then "Integrated Service Management – Thinking Beyond the Data Center" is a must-attend session. If you are a business or Enterprise IT Manager who is looking to start with the cloud – you don't want to miss the "Get Your Head in the Cloud" session, which can tell you how you could get some of your collaboration requirements from the cloud.
Finally, it is a wonderful opportunity for you to talk to some of the Distinguished Engineers and IBM Fellows who can spend 1:1 time with you to listen to your issues/problems as well as discuss the future roadmap. For instance, Bala Rajaraman, the Distinguished Engineer with responsibilities including the architecture and design for Cloud & Service Management solutions, is going to be in India, and it is your opportunity to catch up with Bala.
Last but not the least, there are going to be Solution Expos set up for you, so you have an opportunity to touch and feel the cloud solutions. These should include industry-specific demos and technology/product demos from IBM as well as partners.
So be there on Oct 19 and 20 at the IBM Software Universe 2011. It is going to teach you a new skill – how to ride the next big wave… the cloud wave.
Join us for the Managing the Cloud Webcast series to learn more about best practices, technical approaches and capabilities to help solve your business and technical challenges in the cloud. Sign up for these free 1 hour webcasts today.
Organizations today are looking to cloud computing to deliver cost savings and faster service delivery. However, most organizations are still struggling to have the basic IT infrastructure that is necessary to take the leap to a robust cloud. This session will explain how service management can help provide the essentials to maintain service levels in the cloud and best practices based on IBM's work with customers. This information will provide the foundation for building and managing a cloud to meet your business objectives and transform IT. https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=swg-tivoli-nov8managingcloud
As we discussed in the previous post, it is important that all the processes work together to bring successful automation in the cloud management platform. A process workflow automation engine is what makes this possible. In this chapter we will discuss more about the Tivoli process automation engine, which forms the base for IBM process automation in the cloud space.
The Tivoli process automation engine provides a user interface, configuration services, workflows and the common data system needed for IBM Service Management products and other services. As we already know, IBM Service Management (ISM) is a comprehensive and integrated approach for Service Management, integrating technology, information, processes, and people to deliver service excellence and operational efficiency and effectiveness for traditional enterprises, service providers, and mid-size companies. The Tivoli process automation engine, previously known as Tivoli base services, provides the base infrastructure for applications like Tivoli Maximo Asset Management, Change and Configuration Management Database (CCMDB), Tivoli Service Request Manager (SRM), Tivoli Asset Management for IT (TAMIT), Tivoli Provisioning Manager as well as Tivoli Service Automation Manager. Any product that has the Tivoli process automation engine as its foundation can be installed with any other product that has the Tivoli process automation engine.
Management that integrates and automates IT management processes
Management that integrates people, processes, information and technology
for real business results
Management to automate tasks to address application or business service
operational management challenges
Through having a common process automation engine, we can successfully link Operational and Business services with Infrastructure through a single (J2EE) platform. We can also leverage current investments by linking this engine with existing process automation technologies and products. So by building a unified platform to automate processes, we have taken data integration to the next level, where sharing data between applications has never been easier. This integrated process automation platform can support repeatable IT functions like Incident Management, Problem Management, Change Management and Configuration Management, all the way through to Release Management. All of these processes tie into the CMDB, where they share consistent data via bidirectional integration. The platform supports best practices such as ITIL and other industry best practices. This facilitates an automated approach across the IT management lifecycle. It also forms the basis for automating repetitive tasks that can be handled by the system instead of requiring human (costly) intervention. TPAE, through its adapters, provides data federation from multiple sources that you already have, translating the information into usable data that can be leveraged by internal processes and workflows.
Figure 1 Tivoli
process automation integrated portfolio
Brocade and Avnet Technology Solutions Bring Simplified Server and Desktop Virtualization Solutions to the Channel Through the Brocade CloudPlex Architecture
Integrated Solutions Provide Open, Validated Virtualization Stacks
LAS VEGAS, NV -- (MARKET WIRE) -- 08/30/11 -- VMworld 2011 -- Brocade (NASDAQ: BRCD) and Avnet Technology Solutions, the global IT solutions distribution leader and operating group of Avnet, Inc. (NYSE: AVT), today announced the joint development of marketing and enablement support for a new set of multi-vendor, pre-tested and configured virtualization solutions to help reseller partners in the U.S. and Canada design and deploy open, efficient and scalable virtualization solutions.
This joint effort will focus on building virtual compute blocks, which are integrated, tested and validated solution bundles comprised of server, virtualization, networking and storage resources and an integral element of the Brocade® CloudPlex™ architecture. This open, extensible architectural framework will help end-user organizations and solutions integrators build the next generation of distributed and virtualized data centers in a simple, evolutionary way. Avnet will be providing integration services for these solutions, as well as other channel enablement and sales support.
At VMworld, Brocade will be introducing the first success of its joint development efforts with Avnet: a reference architecture and validated solution designed to cost effectively scale virtual desktop infrastructure (VDI) environments to support thousands of clients (or desktops) per solution bundle. VDI technology offers IT managers a simple and highly effective way to manage operating system and application updates throughout their enterprise, one of the most complex and time-consuming tasks they face in today's highly distributed and mobile computing model.
However, to scale properly, enterprise-wide VDI deployments should be built on top of an open, multi-vendor-based architecture that can scale easily and quickly migrate desktops and user data between data centers when needed, all while simplifying the management and configuration processes.
Specifically, this new set of virtualization solutions will incorporate Brocade and VMware networking and hypervisor technologies in conjunction with a number of compute and storage platforms. All virtualization solutions are qualified through a multistage validation process conducted by Brocade and Avnet, enabling value-added reseller and solutions integrator partners to deploy these solutions with confidence and the assurance of interoperability.
"The collaboration between Brocade and Avnet brings forth best-in-class virtualization solutions designed to address the challenges facing resellers and their end customers today," said Barbara Spicek, vice president of Global Channels at Brocade. "This joint effort underscores Brocade's belief that multi-vendor technology integration must take place in the channel. Whether it be through a world-class distributor like Avnet whose integration capabilities are second to none, or via a reseller that specializes in virtualization and cloud technologies, the channel is where the 'rubber meets the road.' Recognizing this, Brocade is committed to working with channel partners to deliver viable, compelling technology offerings that arm resellers with what they need to remain differentiated in an increasingly competitive marketplace."
As the use of virtualization technology becomes increasingly prevalent, end customers are looking to the reseller community to provide the guidance and expertise on how best to deploy and integrate virtualization technology into their operations. The creation of fully validated, pre-configured, open standards-based virtualization technologies presents an attractive value proposition to resellers looking to quickly and seamlessly implement these solutions in multi-vendor IT environments. In addition, resellers will benefit from the technical know-how and enablement resources available through Avnet's data center services offerings.
"While virtualization and cloud computing presents a world of opportunities to reseller partners, it also introduces new levels of complexity into their business and operations," said Scott Look, vice president and general manager of the Technology Infrastructure Solutions Group at Avnet Technology Solutions, Americas. "At Avnet, we are committed to fully supporting our reseller partners and have evolved our business to be able to meet their changing needs through vertical industry-focused integration services, partner enablement programs and go-to-market assistance."
In addition, Avnet's SolutionsPath® methodology provides resellers a wide range of tools to help them gain relevant technical skills and develop specialized solution-selling capabilities in vertical and technology markets. Avnet's SolutionsPath includes data center technology practices around mobility, networking, security, storage and virtualization, along with vertical market practices for the energy, finance, government, healthcare and retail industries. Avnet resellers also have access to value-added supplemental resources, such as hosting live, online demonstrations from virtually anywhere by connecting to the Avnet Global Solutions Center in Phoenix, Ariz.
Earlier this year, Brocade and Avnet introduced Avnet Accelerator, an invitation-only program designed to help resellers tap into the power of Ethernet fabric technologies to assist their customers in simplifying their network architectures and managing the growth of server virtualization and virtual machines inside their data centers. Today's announcement provides yet another opportunity for resellers to capitalize on the sale and deployment of virtualization solutions.
Availability and Additional Details
The Brocade/VMware VDI solution will be available to U.S. and Canada resellers through Avnet this November. Additional multi-vendor, open standards-based virtualization solutions will be available in the following months.
For more information on how Brocade networking technologies can help customers transition to the cloud, please visit www.brocade.com/cloud
About Brocade Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
About Avnet Technology Solutions As a global IT solutions distributor, Avnet Technology Solutions collaborates with its customers and suppliers to create and deliver services, software and hardware solutions that address the business needs of their end-user customers locally and around the world. For fiscal year 2011, the group served customers and suppliers in more than 70 countries and generated US$11.5 billion in annual revenue. Avnet Technology Solutions (www.ats.avnet.com) is an operating group of Avnet, Inc.
About Avnet, Inc. Avnet, Inc. (NYSE: AVT), a Fortune 500 company, is one of the largest distributors of electronic components, computer products and embedded technology serving customers in more than 70 countries worldwide. Avnet accelerates its partners' success by connecting the world's leading technology suppliers with a broad base of more than 100,000 customers by providing cost-effective, value-added services and solutions. For the fiscal year ended July 2, 2011, Avnet generated revenue of $26.5 billion. For more information, visit www.avnet.com.
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Brocade Unlocks the Power of the Cloud Through Open, Multi-Vendor Virtual Compute Blocks
Brocade and Its Partners Help Customers Build the Next Generation of Distributed and Virtualized Data Centers in a Simple, Evolutionary Way
LAS VEGAS, NV -- (MARKET WIRE) -- 08/30/11 -- (VMworld 2011) -- Today at VMworld, Brocade (NASDAQ: BRCD), the leader in fabric-based data center architectures, announced significant advancements to the Brocade® CloudPlex™ architecture with new Brocade Virtual Compute Blocks. These bundled solutions consist of integrated, tested and validated multi-vendor server, virtualization, networking and storage resources. Demonstrating substantial partner traction, the new solutions are available today, delivered and supported in collaboration with a wide range of alliance partners, including Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
This open approach is an underlying tenet of the Brocade CloudPlex architecture, which was announced in May 2011. The open, extensible framework is designed to help customers build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. It is the foundation for integrated compute blocks and it supports existing multi-vendor infrastructure to unify customers' assets into a single compute and storage domain.
"Organizations are seeking to maximize the benefits of cloud computing through more efficient infrastructure procurement, pre-integrated components, faster support response, and greater choice in best-in-class products to meet specific business needs," said John McHugh, CMO of Brocade. "Brocade Virtual Compute Blocks leverage our Ethernet fabrics and industry-leading Fibre Channel SAN fabrics to allow our partners to create integrated stacks that optimize cost effectiveness, flexibility and performance. Because these solutions are open, they allow our customers to scale components independently and better utilize legacy infrastructures."
According to IDC research, "As organizations move to create a dynamic data center enabled by virtualization, they are moving to architectures where server, storage, and network assets are in tighter alignment into converged infrastructures. IDC defines a converged infrastructure as one in which the server, storage, and network infrastructure resources are treated as pools to be assigned as needed to business services... The top benefits organizations achieve by implementing a converged infrastructure are cost savings, simplified management, better availability, increased flexibility, and higher utilization."(1)
Brocade Virtual Compute Block Partner Solutions Brocade Virtual Compute Block solutions include hypervisor software integrated with servers, storage and Brocade fabric networking products in bundled, pre-racked and pre-tested configurations enriched by technology from Dell, EMC, Fujitsu, Hitachi Data Systems and VMware.
Dell Brocade and Dell have partnered to develop a reference architecture that includes Dell Compellent Fibre Channel storage, Dell PowerEdge servers, Brocade data center and SAN switches and the VMware hypervisor, which is being shown at the Brocade VMworld booth.
"Our reference architecture developed with Brocade demonstrates Dell Compellent's commitment to provide open, cloud-optimized solutions for our customers' increasingly dynamic requirements in Fibre Channel environments," saidPhil Soran, president of Dell Compellent. "Enterprises that deploy this reference architecture benefit from the ability to scale virtualization with their business requirements while deploying industry-leading storage from Dell Compellent and Fibre Channel networking solutions from Brocade."
EMC EMC and Brocade have joined forces with several partners to deliver Virtual Compute Blocks, which combine VMware virtualization software and management tools, EMC® VNXe™ unified storage, servers and integrated Brocade Fibre Channel and Ethernet fabric networking technologies. EMC and Brocade are now working with Arrow, Tech Data, First Distribution and Acao to deliver Virtual Compute Blocks in the U.S., and in parts ofEurope,Africa, andSouth America. These integrated, easy-to-install solutions enable EMC customers to quickly deploy private and hybrid cloud infrastructures, which provide data center consolidation, availability, scalability and automation.
"Our integration work with Brocade is a key enabler for our resellers in providing simplified deployment of Virtual Compute Blocks and further demonstrates our commitment to delivering cloud infrastructure solutions for our mutual customers that help transform data centers into highly efficient and agile environments," saidJosh Kahn, vice president of Solutions Marketing at EMC.
Fujitsu Fujitsu and Brocade have partnered to create solutions supporting Fujitsu's Dynamic Infrastructures architecture, which will help enterprises boost business agility, efficiency and IT economics. These are designed for data centers of the future, delivering powerful automated pools of computing resources made up of server, storage, network and virtualization technology.
"Fabric-based networks are an important requirement to successful deployments of solutions that will enable our customers to accelerate their cloud-based IT initiatives," saidJens-Peter Seick, senior vice president of theProduct Development GroupatFujitsu Technology Solutions. "We are pleased to add Brocade Ethernet fabric technologies to our portfolio, which enhances the long-term partnership we have had in deploying SANs for our customers' virtualized environments."
Hitachi Data Systems Hitachiconverged data center solutionscombine storage, compute and networking, with software management, automation and optimization to automate, accelerate and simplify cloud adoption. As a key networking partner, Brocade provides networking solutions for Hitachi converged data center solutions, including Ethernet switch, Fibre Channel fabric data center switches, and Fibre Channel switch modules for the Hitachi Compute Blade family. Solutions include:
Hitachi solutions built on Microsoft Hyper-V Cloud Fast Track: A combination of Hitachi storage and compute, with Brocade networking and Microsoft Windows Server 2008 R2 with Hyper-V andSystem Centerfor high-performance private cloud infrastructures and an avenue for further automation and orchestration.
Hitachi Unified Compute Platform: An open and converged platform that provides orchestration and management within the portfolio of Hitachi converged solutions for automated dynamic management of servers, storage and networking to create business resource pools from a simple, yet comprehensive interface.
Hitachi Converged Platform for Microsoft Exchange 2010: The first in a portfolio of pre-tested application-specific converged solutions, engineered for rapid deployment and tightly integrated with Exchange 2010's powerful new features for resilience, predictable performance and seamless scalability.
"HDS and Brocade have partnered to deliver tested and proven solutions with tightly integrated storage, compute and networking products that allow our mutual customers to benefit from Ethernet switch and Fibre Channel fabric technologies to create flexible cloud-based infrastructures," saidAsim Zaheer, vice president of Corporate and Product Marketing atHitachi Data Systems. "Through quicker deployment, automation and scalability, Hitachi converged data center solutions help organizations adopt cloud at their own pace and see predictable results and faster time to value."
VMware VMware and Brocade have developed a reference architecture solution that enables organizations to create a scalable virtual desktop infrastructure (VDI) environment.
The VMware/Brocade VDI reference architecture, VMware View™, combines Brocade VDX data center switches and converged network adapters, Intel x86-based rack servers, iSCSI-based storage and Trend Micro security software.
Benefits of the VMware/Brocade VDI solution include best-in-class performance and scalability, enhanced security, ease-of-migration and lower total cost of ownership.
"VMware and Brocade have collaborated on a joint VDI solution that addresses our customers' needs to improve business productivity though increased performance, secured client access and elimination of business disruptions," saidVittorio Viarengo, vice president of End-User Computing at VMware. "IT organizations can utilize our reference architecture to deploy a quick-start configuration within their data center or at remote locations. In addition, it can be used as a test or development platform for businesses eager to gain the benefits and advantages of virtualizing user desktops."
Avnet Virtual Compute Block Solutions Separately today at VMworld, Brocade and Avnet announced the joint development of marketing and enablement support for a new set of multi-vendor, pre-tested and configured virtualization solutions. The first of these is a reference architecture and validated solution designed to cost effectively scale virtual desktop infrastructure (VDI) environments to support thousands of clients (or desktops) per solution bundle. The VDI bundle will help Avnet reseller partners design and deploy open, efficient and scalable virtualization solutions for their end customers by incorporating Brocade and VMware networking and hypervisor technologies in conjunction with a variety of compute and storage platforms.
About Brocade Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is Critical, the Network Is Brocade are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
VMware, VMware View and VMworld are registered trademarks and/or trademarks of VMware, Inc. in the United States and/or other jurisdictions. The use of the word "partner" or "partnership" does not imply a legal partnership relationship between VMware and any other company.
In a cloud service provider environment, there are various business processes and compliance requirements that need to be addressed before the environment can go live / become operational. The following are the areas that need to be designed:
the cloud service requirements
list of Infrastructure Services
list of Platform Services
list of Software Services
Definitions & Non Functional Requirements
of Current IT Environment
of External/Existing Systems & Capabilities to integrate with from a
Business Support and Operations Support perspective like Billing,
number of technical environments needed like Demo, Dev, Staging,
Scalability, Capacity Requirements
of the Management Platform
Recovery of Management Platform
Process for managed servers
and Restore of Images
Roles & Locations
& Approver Administration Workflow
needed for Images
Interface Changes Requirement for Self Service Offering.
and Operational Service Model
Operations& SLA Management
& configuration Management
Problem and Defect Management
the management platform
Scale & Growth Workflow
IBM's strategy differentiates from other vendors in that it is focused on bridging business and IT processes using a common software framework with common services, including process automation and security.
Service Management is built on the Tivoli Service Management Platform and wrapped with best practices, methodologies and services to help you deliver services to your customers effectively and efficiently.
We provide an Integrated Solution that represents the full management of data, processes, tooling and people. The key differentiator is a common data model that all the core solutions can share for simple data sharing. It is important that all the processes work together. A process workflow automation engine is what makes this possible. We will discuss more about this common workflow process automation engine in the next post.
Leading Fuel Card Provider Values Brocade Market Leadership, Reliability and Network Security
SAN JOSE, CA -- (MARKET WIRE) -- 07/19/11 --
Brocade (NASDAQ: BRCD) today announced that FleetCor,
a leading independent global provider of specialized payment products
and services to businesses, commercial fleets, major oil companies,
petroleum marketers and government fleets, has selected Brocade as the
vendor to build its cloud-optimized
network. This new network enhances FleetCor's ability to securely
process millions of transactions monthly and ultimately better serve its
commercial accounts in 18 countries in North America, Europe, Africa and Asia.
Millions of commercial payment cards are in the hands of FleetCor
cardholders worldwide, and they are used to purchase billions of gallons
of fuel per year. Given this volume of network-based transactions, network reliability, scalability and security were critical factors for FleetCor to consider in its selection process to maintain superior customer satisfaction.
In addition, FleetCor selected Brocade as its networking expert to help
evolve its data center and IT operations into a more agile private cloud
infrastructure. Brocade® cloud-optimized networks
are designed to reduce network complexity while increasing performance
and reliability. Brocade solutions for private cloud networking are
purpose-built to support highly virtualized data centers.
"When we evaluated networking vendors to build our private cloud, we
looked at market leadership and non-stop access to critical data," said
Waddaah Keirbeck, senior vice president global IT, FleetCor. "Brocade
cloud-optimized networking solutions are perfect for our data centers
because they allow us to optimize applications faster, virtually
eliminate downtime and help us meet service level agreements for our
customers. Moving to a cloud-based model also provides us the
flexibility to make adjustments on the fly and access secure information
virtually anywhere and anytime."
FleetCor installed a Brocade MLXe router for each of its three data
centers, citing scalability as a major driver for the purchase. This
approach enables FleetCor to virtualize its geographically distributed
data centers and leverage the equipment it already has, at the highest
level, to achieve maximum return on investment. The Brocade MLXe provides additional benefits for FleetCor by using less power and having a smaller footprint than competitive routers; critical in power- and space-constrained locations in order to allow for growth. The Brocade MLXe also enables continuous business operation for FleetCor based on Multi-Chassis Trunking, massive scalability supporting the highest 100 GbE density in the industry with no performance degradation for advanced features like IPv6, and flexible chassis options to meet network and growth requirements.
The Brocade ServerIron ADX
Series of high-performance application delivery switches provides
FleetCor with a broad range of application optimization functions to
help ensure the reliable delivery of critical applications.
Purpose-built for large-scale, low-latency environments, these switches
accelerate application performance, load-balance high volumes of data
and improve application availability while making the most efficient use
of the company's existing infrastructure. It also delivers dynamic application provisioning and de-provisioning for FleetCor's highly virtualized data center, and enables seamless migration and translation to IPv6 with unmatched performance.
As an added benefit for its bottom line, through the use of Brocade ADX Series switches and Brocade MLX™ Series routers
FleetCor has eliminated thousands of costly networking cables, saving
it hundreds of thousands of dollars and allowing the company to segment,
streamline and secure its network. FleetCor has also been able to
easily integrate Brocade network technology with third-party offerings
already installed in the network, for complete investment protection.
FleetCor anticipates moving to 10 Gigabit Ethernet (GbE) solutions for
its backbone switch in the near future.
"We wanted a dependable, secure, redundant, 24 by 7 backbone switch in
each of our data centers to help us leverage the benefits of cloud
computing and the Brocade MLXe delivered on all fronts," said Keirbeck.
"By virtualizing our data center, Brocade allows for non-stop access to
the mission-critical data that FleetCor and its customers rely on every
day. We chose the Brocade MLXe because of the tremendous results we
already saw from our existing Brocade solutions and the exceptional
support and service."
According to a report from analyst firm Gartner, "Although 'economic
affordability' is an immediate, attractive benefit, the biggest
advantages (of cloud services) result from characteristics such as
built-in elasticity and scalability, reduced barriers to entry,
flexibility in service provisioning and agility in contracting."(1)
(1) Gartner, "Cloud-Computing Service Trends: Business Value Opportunities and Management Challenges, Part 1," February 23, 2010
About Brocade Brocade, the B-wing symbol, DCX, Fabric OS, and SAN Health are registered trademarks, and Brocade Assurance, Brocade NET Health,
Brocade One, CloudPlex, MLX, VCS, VDX, and When the Mission Is
Critical, the Network Is Brocade are trademarks of Brocade
Communications Systems, Inc., in the United States
and/or in other countries. Other brands, products, or service names
mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not
set forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade.
Brocade reserves the right to make changes to this document at any time,
without notice, and assumes no responsibility for its use. This
informational document describes features that may not be currently
available. Contact a Brocade sales office for information on feature and
product availability. Export of technical data contained in this
document may require an export license from the United States government.
Note: This is a (slightly updated) re-post from a personal blog - just my view in the context of IBM's drive to foster open choice and collaboration. Please bear in mind that this is based on my personal thoughts (not an official IBM position) and read the article as it is intended to be - thought provoking ... enjoy!
Having returned from the European Red Hat Partner summit and the VMware vForum where I presented on behalf of IBM, it took me a while to digest the "openness" of it all … so let me share my thoughts retrospectively. The key messages conveyed in both events were (un?)surprisingly similar, considering that we have a major open source software company on one side and a more traditional "business" model on the other.
Being proprietary rocks…! (?) Let’s be straight – one could argue that in an ideal world (for selfish, money-making businesses without ethics) there would be no open source, being proprietary rocks! After all making money by attracting and “retaining” clients (I’m deliberately not saying “locking them in”) is ultimately the goal of every business – and that (the attracting/retaining clients bit) actually applies to VMware in the same way as Redhat (if we don't mix up the ‘open source community’ with Redhat as a business) … Now that would obviously completely ignore the power and dynamics of an open technical community but more importantly that’s not in the interest of the consumer… Public cloud promises to empower the consumer – so they will increasingly be looking for choice … no capital dependency, outsourced, pay per use service operation models enable you (in theory!) to switch providers like I just switched my energy and gas supplier to XXX last week – go to a comparison site, find the best deal and “click” … done (obviously not reality today with cloud).
Public cloud can only exist on open source … ? What both events made crystal clear is that increasingly many "traditional" businesses will be forced to have a foot in both camps in order to balance customer demand for open choice with a business model allowing them to make money and retain customer "affinity" (otherwise we probably wouldn't see URLs like this … http://www.microsoft.com/opensource/).
There was a bold statement by a speaker at the Red Hat summit: "Public cloud can only live on open source!" I was initially inclined to agree but then thought this through again and adjusted it mentally to what I believe to be more appropriate: "public clouds need to live on INTEROPERABLE source" … Open source should of course help to facilitate this, but if I just end up with a bunch of non-intuitive, non-integrated code, with undocumented APIs and outlandish image formats, then the fact that it's open source doesn't help me at all. So I am not saying that I don't believe in open source, actually quite the opposite; all I'm saying is that the "open source" stamp on its own is not good enough, and as a consumer of resources (not a developer) I would indeed consider a proprietary solution as long as it is intuitive, cheap, with well-documented APIs and – that is the key – inter-operable with other public providers. So it is important to understand the difference between open source and open standards.
The Public cloud is only as good as the "connectors" to it - Key Battle 1: Hybrid Connectors VMware very much provides the majority of today's x86 virtual enterprise footprint (a good chunk of that on IBM infrastructure, and IBM very closely partnering with VMware). With that, VMware has potentially a critical control point in the private cloud. The Public cloud is a completely different story, with over 80% being OSS based and VMware hardly to be seen yet! So especially for VMware it must be of utmost importance to provide a 'best of breed' connector between existing vSphere infrastructures and public vCloud Director resources before others provide this linkage to other (non-VMware) public platforms. So I expect a lot of focus on vCloud Connector functionality from VMware (in the same way as on 'Concero' from Microsoft). VMware's strategy therefore is to entice Service Providers to take advantage of the existing vSphere footprint: "Hey look, many of your customers already have VMware, the only thing you need to do is to provide public vCloud Director resources for them to burst out to – we provide the connector, it's as simple as this!" Now, that might sound great, but the main concern for me (the consumer) simply is how much of a dependency is being created for me by doing this, how easily can I go and "click" to switch to an Amazon, IBM or Rackspace cloud once I am in that environment ...? So there clearly is a chance to develop a public VMware cloud ecosystem around vCD in this way – but how long before someone else offers seamless alternatives (more than just Amazon's VM Import)? So will it be enough to only provide linkage to public VMware vCD resources? IMHO absolutely not. I am very curious to see how much VMware will enable connectivity to other public provider platforms going forward … Again, it will be a fine balancing act but I'm convinced that it won't be successful otherwise.
In the meanwhile keep your eyes peeled and expect the industry to increase focus on enabling hybrid connectors - I obviously can't make any specific forward-looking statements from an IBM perspective. But just take Red Hat as an example: it made clear that CloudForms (their IaaS platform) can indeed manage VMware through their DeltaCloud driver and – while currently positioning CloudForms for private and hybrid – their vision (of course) is for DeltaCloud to be the top-level public layer linking into private (or public) VMware clouds.
Key Battle 2: PaaS Now – here's another (the real?) battle for Cloud control (or better 'ecosystem control') … Who will provide the application platform for these future cloud-based applications? Who will control the ecosystem of future application suites? Who will be the next "Microsoft", you might ask? A lot will come down to control points and the pain of moving. If switching public cloud providers could really be as easy as switching utility providers, switching your application platform (e.g. as an ISV) is rather like moving house! Using open standards is a great value proposition here, and it's not just the OSS providers who have realised this …
Red Hat recently announced their hosted "OpenShift" PaaS platform, which essentially allows developing and running Java, Ruby, PHP and Python applications and comes in 3 different editions: from 1) "Express" (free), which provides a runtime environment for simple Ruby, PHP and Python apps, through 2) "Flex" for multi-tiered Java and PHP apps with more options (like mySQL DBs and JBoss middleware), to full control with 3) the "Power" edition supporting "any application or programming language that can compile on RHEL 4, 5, or 6" and enabling you to deploy apps directly on EC2 and (in the near future) to IBM's SmartCloud.
VMware had earlier announced their own open "Cloud Foundry" PaaS project; it has incarnations as a fully hosted service (currently in beta), as an open source project (CloudFoundry.org) and as a free single PaaS instance for local development use. An interesting move IMHO which could help the adoption of this layer for VMware (away from e.g. MS Azure, Google's App Engine or Amazon's Elastic Beanstalk).
So what's IBM doing in this space? IBM has recently announced the IBM Workload Deployer - an evolution of the WebSphere CloudBurst hardware appliance. It essentially stores and secures "WebSphere Application Server Hypervisor Edition images" and, more importantly, workload patterns which can be published into a cloud. These workload patterns (think of them as customizable templates that capture settings, dependencies and configuration required to deploy applications) enable you to focus on what essentially differentiates PaaS from IaaS ... the application rather than the infrastructure. Dustin Amrhein explains this much better than I can in this little blog. Importantly, all this comes with REST APIs that allow for standards-based integration into existing environments, including Tivoli. If you have only 10 minutes to spare I can only recommend watching this great video from Cloud Jason (there are 3 more) ... I promise you will get a really good idea of what IWD can do!
Professional Suicide So, yes, I honestly believe that KVM has a good chance to become hypervisor of choice for public cloud. However … that is unlikely to be the control point… . So which management platform(s) will take that all important crown …? Will it be an OSS based one? I don’t want to hazard a guess, there are many …and that is part of the problem, many argue that the open source “communities” will have to overcome a challenge and become a COMMUNITY if they want to succeed. ESX could not be beaten with 7 or 8 different (but weak) flavours of Xen and that was just a single OSS project splintered by commercial offerings … in the same way the sea of OSS based cloud controllers with eucalyptus, openstack, cloudstack, deltacloud, opennebula faces focussed (more proprietary) heavy-weights like Microsoft, Google and Amazon. The increasing number of OSS management solutions and “open bodies” will also make e.g. VMware less nervous than intended as long as they indirectly compete with each other …
BUT (and it’s a big “but”) I would argue that anyone not strategically looking at these open solutions is at best ignorant or – e.g. if you are a service provider yourself – more likely long-term professionally suicidal … yes, in an ideal world everyone wants ‘today’s best of breed' but more critically you have to maintain your negotiation potential through the ability to switch and if only for that reason alone you need to keep your options open! It will be of the utmost importance to partner with solution providers who share this mind-set and have the capability and strategy to support such a long-term goal and yes, IBM is clearly uniquely positioned to fulfill this role. And while I spoke to many completely different clients at both events, that was a common concern raised by most of them.
Industry endorsement like the recent OVA announcement - with IBM being a major driving force and supporter - will help to give KVM the needed credibility and weight … I am looking forward to seeing these visions translated into tangible solutions.
Great video. There are a great many folks that have already started making the journey into the clouds without being fully aware of it. Consider that most large Enterprise Data centers are consolidating and virtualizing servers, storage and networking today; when you get all 3 of those areas consolidated and virtualized, you are transforming business processes and will eventually reach a point where infrastructure/information on demand is the next logical step.
Cloud Service Provider Platform (CSP2) is a carrier grade cloud offering
that contains enhancements over the base ISDM solution to provide a
multi-tenancy environment that allows both internal and external users to exist
on the same cloud and management platforms. IBM's new CSP2 platform provides
cloud services such as desktop management to influence the cloud based business
strategy of communications service providers.
Cloud Service Provider Platform is specifically tailored to the needs of CSPs
and is designed to help them successfully:
Create cloud services that
harness the strengths of a diverse partner ecosystem and rapidly enable
applications and solutions to extend their market reach.
Manage cloud services quickly
and easily with an open, carrier-grade, secure, scalable, automated and
integrated service management solution.
Monetize cloud services by
leveraging business intelligence and analytics to achieve differentiation,
maximize revenue and enhance the customer experience.
Figure 1 IBM Integrated Service
Management Solution for Cloud Service Providers
Communications service providers (CSPs) around the world are
looking for smarter ways of doing business. They are being challenged to
transform the way services are created, managed, and delivered. CSP2 neatly
integrates and extends the SPDE (Service Provider Delivery Environment) for
Communication Service Providers to build the ecosystem to become a cloud
service provider. For a cloud based business strategy - check out the video from Scott on the value of CSP2 for CSPs.
In this article learn how to:
Set up a 64-bit Linux instance (a Bronze-level offering) with the Linux Logical Volume Manager (LVM).
Capture a private image and provision it as a new Platinum instance.
Grow the LVM volume and file system to accommodate the new physical volumes.
Configure LVM across physical volumes using Linux LVM-type partitions.
Background on LVM and the test scenario
First, a description of LVM concepts and the test scenario for those who may not be familiar with LVM.
Note: You are about to configure Linux LVM: Here be Dragons. Mind the gap.
The Linux LVM is organized into physical volumes (PVs), volume groups (VGs), and logical volumes (LVs):
Physical volume: Physical HDDs or physical HDD partitions (such as /dev/vdb1).
Extents: PVs are split into chunks called physical extents (PEs). Logical extents (LEs) map 1:1 to PEs and are used for the physical-to-logical volume mapping.
Volume group: A virtual disk consisting of aggregated
physical volumes. VGs can be logically partitioned into LVs.
Logical volume: Acts as a virtual
disk partition. After creating a VG you can create LVs in that VG.
They can be used as raw block devices, swap devices, or
for creating a (mountable) file system just like disk partitions.
File system: LVs can be used as raw devices or swap, but are more commonly "formatted" with a supported file system and mounted to a defined mountpoint. I'll format the LV as an ext3 file system in this scenario.
Partition table: You'll use tools like fdisk, sfdisk, or
cfdisk to manipulate the block device partition table and create Linux LVM (8e) type partitions.
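To make these concepts concrete, here is a minimal sketch of the commands involved, assuming a new disk shows up as /dev/vdc and using placeholder volume group and logical volume names (vg_data, lv_data); adapt device names and sizes to your own instance:
# Create a Linux LVM (type 8e) partition on the new disk, then initialize it as a PV
fdisk /dev/vdc            # create /dev/vdc1 and set its partition type to 8e
pvcreate /dev/vdc1
# Create a volume group and a logical volume, then format and mount it
vgcreate vg_data /dev/vdc1
lvcreate -L 20G -n lv_data vg_data
mkfs.ext3 /dev/vg_data/lv_data
mkdir -p /data && mount /dev/vg_data/lv_data /data
# Growing later: add another PV, extend the VG and LV, then resize the file system
pvcreate /dev/vdd1
vgextend vg_data /dev/vdd1
lvextend -L +20G /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data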
Management platform sizing means sizing the following components, which provide the functional capabilities:
Service Request Management
Service Monitoring & Service Level Management
Service Usage & Accounting
Sizing will also be affected by the non-functional considerations that need to be addressed by each of these components of the management platform. One should review the performance reports and workload pattern/handling capabilities of each of the selected products to validate that the sizing can meet the non-functional requirements of the solution.
The size of the management platform depends on the size of the managed environment. It is preferable to keep a centralized management environment and scale it as needed as the managed environment grows. This is often not an easy calculation or a simple process; you need to apply sound engineering to plan the capacity for each capability. Apart from the capabilities discussed above, the following key areas also need to be covered:
Service Availability Management
In order to size for all these capabilities you need answers to some very critical questions. The right sizing and capacity planning depend on how well the project can answer questions such as the following:
What operations are expected to be performed with management platform?
What are the average and peak concurrent administrator workloads?
What is the enterprise network topology?
What is the expected workload for provisioned virtual servers, and how do they map to the physical configuration?
For the provisioned servers: What is the distribution size?
What are the application service level requirements?
High Availability (HA) consideration is another important aspect to include in the capacity planning. The management platform has to be designed for HA with appropriate policies defined.
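To make this concrete, here is a purely illustrative back-of-the-envelope sketch (every number below is a made-up assumption, not a recommendation) showing how answers to the questions above feed a first estimate of the monitoring and request-handling load on the management platform:
# Illustrative sizing sketch; all values are hypothetical assumptions
PROVISIONED_VMS=2000          # expected size of the managed environment
METRICS_PER_VM=25             # monitored metrics per virtual server
POLL_INTERVAL_SEC=300         # monitoring poll interval (5 minutes)
PEAK_ADMINS=40                # peak concurrent administrator sessions
samples_per_min=$((PROVISIONED_VMS * METRICS_PER_VM * 60 / POLL_INTERVAL_SEC))
echo "Monitoring tier: ~$samples_per_min samples/minute for $PROVISIONED_VMS VMs"
echo "Request/UI tier: must sustain $PEAK_ADMINS concurrent administrator sessions"
Numbers like these are then checked against the published performance and workload-handling reports of each selected product.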
This IBM® Redpaper™ publication introduces PowerVM™ Active Memory™ Sharing on IBM Power Systems™ based on POWER6® and later processor technology. Active Memory Sharing is a virtualization technology that allows multiple partitions to share a pool of physical memory. This is designed to increase system memory utilization, thereby enabling you to realize a cost benefit by reducing the amount of physical memory required.
The paper provides an overview of Active Memory Sharing, and then demonstrates, in detail, how the technology works and in what scenarios it can be used. It also contains chapters that describe how to configure, manage and migrate to Active Memory Sharing based on hands-on examples.
The paper is targeted to both architects and consultants who need to understand how the technology works to design solutions, and to technical specialists in charge of setting up and managing Active Memory Sharing environments. For performance related information, see: ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/pow03017usen/POW03017USEN.PDF
Dubuque, Iowa and IBM Combine Analytics, Cloud Computing and Community Engagement to Conserve Water
DUBUQUE, Iowa - 20 May 2011: The City of Dubuque and IBM (NYSE:IBM) today announced that the IBM analytics and cloud computing technology deployed in 2010 by Dubuque as part of its Smarter Sustainable Dubuque research helped reduce water utilization by 6.6 percent and increased leak detection and response eightfold.
The Smarter Sustainable Dubuque Water Pilot Study empowered 151 Dubuque households with information, analysis, insights and social computing around their water consumption for nine weeks. By providing citizens and city officials with an integrated view of water consumption, the Water Pilot resulted in water conservation, increased leak reporting rate, and encouraged behavior changes.
Water savings were measured by comparing the consumption of the 151 pilot households with another 152 control group households with identical smart meters but without the access to the analysis and insights provided by the Water Pilot Study for the nine-week duration.
The smart meter system monitored water consumption every 15 minutes and communicated the data to the IBM Research Cloud. Additional data, including weather, demographics, and household characteristics, was also collected. Using cloud computing, the data was analyzed to trigger notification of potential leaks and anomalies, and helped volunteers understand their consumption in greater detail. Volunteers were only able to view their own consumption habits, while city management could see the aggregate data. All participating homes were volunteers, and the data being collected was anonymous and contained no confidential information.
Participating households were alerted about potential anomalies and leaks and were able to get a better understanding of their consumption patterns and, compare and contrast it anonymously with others in the community. Pilot study participants accessed their personal water usage information through a website portal and participated in online games and competitions aimed at promoting sustainable behavior enabling them to become fully engaged and informed about their consumption and the impact of the changes they made to it. Participants were able to see their data expressed in dollar savings, gallon savings and carbon reduction.
A cloud is not a cloud if it is not elastic. The elastic property of the cloud, expanding and shrinking based on demand, is possible only with proper capacity planning. I feel the most difficult exercise while building a cloud solution is capacity planning for your cloud. By this, I mean you have to size the managed environment as well as the management platform.
Most of the engagements that I've walked into already have some capacity or infrastructure that the client wants us to leverage and use in the cloud. So the comparison becomes difficult if you don't have a standard measuring unit for your infrastructure; for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in this guide, in this interesting article –
The answer to this difficult question was to use something called the cloud CPU unit, which is simply the computing power of a one-gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz will have the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
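As a rough illustration (the host inventory below is made up), the same arithmetic can be scripted to tally the CPU units available across a pool of hosts:
#!/bin/bash
# Hypothetical inventory: one host per line as "sockets cores_per_socket clock_GHz"
hosts="2 4 3
2 6 2
4 8 2"
total=0
while read -r sockets cores ghz; do
  units=$((sockets * cores * ghz))     # cloud CPU units = sockets x cores x GHz
  total=$((total + units))
done <<< "$hosts"
echo "Total capacity: $total CPU units"
The same unit then gives you a common yardstick when comparing projected demand against whatever existing hardware the client wants to reuse.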
The other dimension of the complexity is determining the resource needs and doing the trending and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask are:
How many concurrent users and peak users are there, and what percentage of these users needs to be covered?
What type of workloads do they typically run – development, test?
What are the image attributes – memory, CPU, storage, etc.?
An infrastructure planner for cloud made life easy for me; it had a user-friendly interface to take me through these steps and arrive at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I'll discuss the details of how to plan the managed environment in my next post.
I'll be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
In Collaboration With Ixia, Brocade Will Demonstrate the
Performance, Reliability and Advanced Feature-Set of the Industry's
First 100 GbE Terabit-Trunk Router
LAS VEGAS, NV -- (MARKET WIRE) -- 05/09/11 --
- INTEROP 2011 -- Brocade (NASDAQ: BRCD) today announced that it will work with Ixia
(NASDAQ: XXIA) to replicate mission-critical service provider
environments and test high-capacity Brocade® Ethernet network solutions
designed to help service providers become cloud-optimized. The
demonstration creates a true-to-life service provider infrastructure
scenario for increasing IPv4/IPv6 routing scalability within the core
Multiprotocol Label Switching (MPLS) network while retaining high
service levels for end customers. The demonstration will be held in the
Brocade booth (# 833) during Interop Las Vegas 2011, at the Mandalay Bay Convention Center.
As service providers evolve to become destinations offering cloud-based
services, rather than just basic data delivery, the performance and
scalability demands on their networks have increased significantly. The
Brocade MLXe Core Router is a 100 Gigabit Ethernet (GbE)-ready solution
that enables service providers and virtualized data centers to support
these demands by efficiently delivering cloud-based services that use
less infrastructure and help reduce expenditures.
In this specific demonstration, Brocade and Ixia
will test the IPv4/IPv6 traffic flows, MPLS and throughput capabilities
of the Brocade MLXe multiservice router over 10 and 100 GbE
connections. By leveraging Ixia's leading test solutions, attendees will be able to view the following:
Ixia IxNetwork application emulating a large-scale Layer 3 virtual
private network (VPN) topology surrounding the Brocade MLXe router
Brocade MLXe router maintaining forwarding information and peering relationships
IxNetwork generating line-rate traffic sourced and destined over K2
100 GbE ports and Xcellon-Flex™ 10 GbE ports to fully load the Brocade
MLXe router, showcasing the forwarding plane performance
IxNetwork's real-time flow statistics and detailed reporting tools
validating the scalability of the Brocade MLXe peering sessions and
control plane scalability
Brocade MLXe forwarding low-latency traffic to all destination routes
Repeatable and scalable testing using Test Composer automation built into IxNetwork
About Brocade Brocade (NASDAQ: BRCD) networking solutions help the world's
leading organizations transition smoothly to a world where applications
and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States
and/or in other countries. Other brands, products, or service names
mentioned are or may be trademarks or service marks of their respective owners.
Faced with a nasty loss of credibility, a string of poor financial results, shrinking market share in its core business, an unwieldy and alienating bureaucracy blamed for the top executive exodus it has been experiencing, and a stock price that's plunged into the toilet, Cisco, once an economic bellwether, is promising to do more than simply kill off its once-popular Flip video camcorder business and lay 550 people off, an admission that its foray into the consumer segment had largely failed.
It said in a press release issued Thursday morning that it's moving to a "streamlined operating model" focused on five areas, not, apparently, the literally 30 different directions it's been going in, although it did say, come to think of it, something about "greater focus," so maybe it's not really cutting back.
These focus areas are, it said, "routing, switching, and services;
collaboration; data center virtualization and cloud; video; and
architectures for business transformation."
Nobody seems to know what that last one is, and the Wall Street Journal criticized Cisco for not being able to explain in plain English what it's doing, while Barron's complained that it needed a Kremlinologist to decrypt the jargon in the press release.
Anyway Cisco's apparently going to try to simplify its sales,
services and engineering organizations in the next 120 days or by July
31 when its next fiscal year begins. Well, maybe not everything, it
warned, but sales ought to be reorganized by then.
This streamlining seems to mean that:
Field operations will be organized into three geographic regions
for faster decision making and greater accountability: the Americas,
EMEA and Asia Pacific, Japan and Greater China still under sales chief
Services will follow key customer segments and delivery models still under its multi-tasking COO Gary Moore;
Engineering, still reporting to Moore, will now be led by two-in-a-box Pankaj Patel and Padmasree Warrior, and aside from the company's five focus areas there will be a dedicated Emerging Business Group under Marthin De Beer focused on "select early-phase businesses" "with continued focus on integrating the Medianet architecture for video across the company."
Lastly, it's going to "refine" - but apparently not dismantle - its hydra-headed, decision-inhibiting Council structure, blamed for frustrating and running off key talent, down to three councils "that reinforce consistent and globally aligned customer focus and speed to market across major areas of the business: Enterprise, Service Provider and Emerging Countries. These councils will serve to further strengthen the connection between strategy and execution across functional groups. Resource allocation and profitability targets will move to the sales and engineering leadership teams which will have accountability and direct responsibility for business results."
It's unclear whether any of this means layoffs.
Cisco piped in a quote credited to Moore, saying: "Cisco is focused on
making a series of changes throughout the next quarter and as we enter
the new fiscal year that will make it easier to work for and with Cisco,
as we focus our portfolio, simplify operations and manage expenses. Our
five company priorities are for a reason - they are the five drivers of
the future of the network, and they define what our customers know
Cisco is uniquely able to provide for their business success. The new
operating model will enable Cisco to execute on the significant market
opportunities of the network and empower our sales, service and
Brocade Introduces Brocade CloudPlex(TM), an Open, Extensible Architecture for Virtualization and Cloud-Optimized Networks
SAN JOSE, CA -- (MARKET WIRE) -- 05/03/11 -- Brocade (NASDAQ: BRCD) today introduced a new technology architecture that outlines the company's vision and the technology investments it will make to help its customers evolve their data centers and IT resources and migrate them to the "Virtual Enterprise."
Brocade intends to deliver on this vision through the Brocade CloudPlex™ architecture, an open, extensible framework intended to enable customers to build the next generation of distributed and virtualized data centers in a simple, evolutionary way that preserves their ability to dictate all aspects of the migration. What is unique about the Brocade CloudPlex architecture is that it is not only the foundation for integrated compute blocks, but also embraces a customer's existing multi-vendor infrastructure to unify all of their assets into a single compute and storage domain.
Brocade CloudPlex meets the goal of the Brocade One™ strategy, designed to help companies transition smoothly to a world where information and applications can reside anywhere by delivering solutions that deliver unmatched simplicity, non-stop performance, application optimization and investment protection.
"Virtualization has fundamentally changed the nature of applications by detaching them from their underlying IT infrastructure and introducing a high degree of application mobility across the entire enterprise," saidDave Stevens, chief technology officer at Brocade. "This is the concept of the 'Virtual Enterprise' that we feel unleashes the true potential of cloud computing in all its forms -- private, hybrid and public."
Through the CloudPlex architecture, Brocade will help its customers scale their IT environments from managing hundreds of virtual machines (VMs) in certain classes of servers to tens of thousands of VMs that are distributed and mobilized across their entire enterprise and throughout the cloud. According to Gartner, the expansion of VMs not only improves automation and reduces operational expenses, it is the primary requirement for IT organizations to migrate to cloud architectures.(1)
Gartner advises that, "IT organizations pursuing virtualization should have an overall strategic plan for cloud computing and a roadmap for the future, and should plan proactively. Further, these organizations must focus on management and process change to manage virtual resources, and to manage the speed that virtualization enables, to avoid virtualization sprawl."
CloudPlex Components The Brocade CloudPlex architecture will define the stages and the components from Brocade and its partners that are required to get to the Virtual Enterprise. The stages comprise three main categories -- fabrics, globalization and open technologies -- with some of these components being available today while others are in development or on the roadmap of Brocade's engineering priorities.
The currently available components are:
Networks comprised of Ethernet fabrics and Fibre Channel fabrics as the flat, fast and simple foundation designed to scale to highly virtualized IT environments;
Multiprotocol fabric adapters for simplified server I/O consolidation;
High-performance application delivery products necessary for load balancing network traffic across distributed data centers;
The components on the roadmap are:
Integrated, tested and validated solution bundles of server, virtualization, networking and storage resources called Brocade Virtual Compute Blocks. An integral element of the Brocade CloudPlex architecture, Brocade will enable its systems partners and integrators to deliver Virtual Compute Block solutions comprising servers, hypervisors, storage, and cloud-optimized networking in pre-bundled, pre-racked configurations with unified support;
Powerful and universal fabric and network extension delivered through a new platform capable of supporting a number of IP, SAN and mainframe extension technologies including virtual private LAN services (VPLS), Fibre Channel over IP (FCIP) and FICON;
An advancement of Brocade Fabric ID technology called "Cloud IDs" that enables simple and secure isolation and mobility of VMs for native multi-tenancy cloud environments;
An open framework for management, provisioning and integration designed to promote multi-vendor and system-to-system interoperability specifically for cloud environments. These include Brocade products supporting OpenStack software for storage, compute and Software-Defined Networking (SDN) capabilities enabled through OpenFlow;
Unified education, support and services delivered through Brocade and partners to help customers manage this highly distributed "Virtual Enterprise" environment.
Brocade Partner Endorsements "We are excited to be working with Brocade to develop highly-scalable virtualized computing and storage configurations, providing superior cost-performance solutions today for our customers while at the same time establishing a clear path to cloud IT architectures in the future. Specifically, Brocade switches coupled with Dell PowerEdge servers and EqualLogic or Dell Compellent storage provide the scalability, flexibility and efficiency our customers demand in the virtual era." --Dario Zamarian, Vice President and General Manager, Dell Networking
"Fujitsu'sglobal cloud strategy is built on our real experience in working with customers on the delivery of both Services and Infrastructures for Cloud computing across the world. We believe that common processes, holistic management of infrastructure elements and the use of industry standards are fundamentally helping customers to ease the transition and to migrate their largest and most complex IT environments smoothly to join 'any mode' of the cloud consumption of their choosing. Brocade shares these views and has laid out a compelling vision through its CloudPlex architecture that Fujitsu Technology Solutions fully endorses and will support. This architecture provides compelling added value toFujitsu'sCloud offerings by defined standards and holistic management." --Jens-Peter Seick, Senior Vice President, Data Center Systems,Fujitsu
"Hitachiis helping customers deliver IT services through the cloud by using open, standards-based technologies that let them build and scale their virtualized data centers at their own pace. With Brocade's CloudPlex architecture, bothHitachiand Brocade address our mutual customers' IT needs and protect their existing IT investments by migrating their legacy devices to cloud deployments -- preventing cloud from becoming just another IT silo." --Sean Moser, Vice President, Storage Software Product Management,Hitachi Data Systems
"Recent advancements in cloud and virtualization are making it possible for enterprises to deploy an intelligent infrastructure that enables workloads to move around the enterprise and around the world in a transparent, fluid way. We believe that enabling flexible application deployment is imperative to mainstream adoption of cloud computing.VMwareand Brocade share a common vision of offering customers the ability to accelerate IT by reducing complexity while significantly lowering costs and enabling more flexible, agile services delivery." --Parag Patel, Vice President, Global Strategic Alliances,VMware
Unveiling at Brocade Technology Day Summit Brocade CTO Dave Stevens will discuss more details about the CloudPlex architecture at the annual Brocade Technology Day Summit taking place on its San Jose campus on May 3 and 4. To participate in the event via a live webcast, please visit the following page on Brocade's Facebook page or simply register for the event at:
About Brocade Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
(1) Source: "The Road Map From Virtualization to Cloud Computing" (Gartner, March 2011)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Load Balancers Are Dead: Time to Focus on Application Delivery 2 February 2009 Mark Fabbi Gartner RAS Core Research Note G00164098 When looking at feature requirements in front of and between server tiers, too many organizations think only about load balancing. However, the era of load balancing is long past, and organizations will be better served to focus their attention on improving the delivery of applications.
Overview This research shifts the attention from basic load-balancing features to application delivery features to aid in the deployment and delivery of applications. Networking organizations are missing significant opportunities to increase application performance and user experience by ignoring this fundamental market shift.
Enterprises are still focused on load balancing.
There is little cooperation between networking and application teams on a holistic approach for application deployment.
Properly deployed application delivery controllers can improve application performance and security, increase the efficiency of data center infrastructure, and assist the deployment of the virtualized data center.
Network architects must shift attention and resources away from Layer 3 packet delivery networks and basic load balancing to application delivery networks.
Enterprises must start building specialized expertise around application delivery.
What you need to Know IT organizations that shift to application delivery will improve internal application performance that will noticeably improve business processes and productivity for key applications. For external-facing applications, end-user experience and satisfaction will improve, positively affecting the ease of doing business with supply chain partners and customers. Despite application delivery technologies being well proved, they have not yet reached a level of deployment that reflects their value to the enterprise, and too many clients do not have the right business and technology requirements on their radar.
Analysis What's the Issue? Many organizations are missing out on big opportunities to improve the performance of internal processes and external service interactions by not understanding application delivery technologies. This is very obvious when considering the types of client inquiries we receive on a regular basis. In the majority of cases, clients phrase their questions to ask specifically about load balancing. In some cases, they are replacing aged server load balancers (SLBs), purchased before the advent of the advanced features now available in leading application delivery controllers (ADCs). In other cases, we get calls about application performance challenges, and, after exploring the current infrastructure, we find that these clients have modern, advanced ADCs already installed, but they haven't turned on any of the advanced features and are using new equipment as if it were circa-1998 SLBs. In both cases, there is a striking lack of understanding of what ADCs can and should bring to the enterprise infrastructure. Organizations that still think of this critically important position in the data center as one that only requires load balancing are missing out on years of valuable innovation and are not taking advantage of the growing list of services that are available to increase application performance and security and to play an active role in the increasing virtualization and automation of server resources. Modern ADCs are the only devices in the data center capable of providing a real-time, pan-application view of application data flows and resource requirements. This insight will continue to drive innovation of new capabilities for distributed and virtualized applications.
Why Did This Happen? The "blame" for this misunderstanding can be distributed in many ways, though it is largely history that is at fault. SLBs were created to better solve the networking problem of how to distribute requests across a group of servers responsible for delivering a specific Web application. Initially, this was done with simple round-robin DNS, but because of the limitations of this approach, function-specific load-balancing appliances appeared on the market to examine inbound application requests and to map these requests dynamically to available servers. Because this was a networking function, the responsibility landed solely in network operations and, while there were always smaller innovative players, the bulk of the early market ended up in the hands of networking vendors (largely Cisco, Nortel and Foundry [now part of Brocade]). So, a decade ago, the situation basically consisted of networking vendors selling network solutions to network staff. However, innovation continued, and the ADC market became one of the most innovative areas of enterprise networking over the past decade. Initially, this innovation focused on the inbound problem — such as the dynamic recognition of server load or failure and session persistence to ensure that online "shopping carts" weren't lost. Soon, the market started to evolve to look at other problems, such as application and server efficiency. The best example would be the adoption of SSL termination and offload. Finally, the attention turned to outbound traffic, and a series of techniques and features started appearing in the market to improve the performance of the applications being delivered across the network. Innovations migrated from a pure networking focus to infrastructure efficiencies to application performance optimization and security — from a networking product to one that touched networking, server, applications and security staff. The networking vendors that were big players when SLB was the focus, quickly became laggards in this newly emerging ADC market.
Current Obstacles As the market shifts toward modern ADCs, some of the blame must rest on the shoulders of the new leaders (vendors such as F5 and Citrix NetScaler). While their products have many advanced capabilities, these vendors often undersell their products and don't do enough to clearly demonstrate their leadership and vision to sway more of the market to adopting the new features. The other challenge for vendors (and users) is that modern ADCs impact many parts of the IT organization. Finally, some blame rests with the IT organization. By maintaining siloed operational functions, it has been difficult to recognize and define requirements that fall between functional areas.
Why We Need More and Why Should Enterprises Care? Not all new technologies deserve consideration for mainstream deployment. However, in this case, advanced ADCs provide capabilities to help mitigate the challenges of deploying and delivering the complex application environments of today. The past decade saw a mass migration to browser-based enterprise applications targeting business processes and user productivity as well as increasing adoption of service-oriented architectures (SOAs), Web 2.0 and now cloud computing models. These approaches tend to place increased demand on the infrastructure, because of "chatty" and complex protocols. Without providing features to mitigate latency, to reduce round trips and bandwidth, and to strengthen security, these approaches almost always lead to disappointing performance for enterprise and external users. The modern ADC provides a range of features (see Note 1) to deal with these complex environments. Beyond application performance and security, application delivery controllers can reduce the number of required servers, provide real-time control mechanisms to assist in data center virtualization, and reduce data center power and cooling requirements. ADCs also provide simplified deployment and extensibility and are now being deployed between the Web server tier and the application or services tier (for SOA) servers. Most ADCs incorporate rule-based extensibility that enables customization of the behavior of the ADC. For example, a rule might enable the ADC to examine the response portion of an e-commerce transaction to strip off all but the last four digits of credit card numbers. Organizations can use these capabilities as a simple, quick alternative to modifying Web applications. Most ADCs incorporate a programmatic interface (open APIs) that allows them to be controlled by external systems, including application servers, data center management, and provisioning applications and network/system management applications. This capability may be used for regular periodic reconfigurations (end-of-month closing) or may even be driven by external events (taking an instance of an application offline for maintenance). In some cases, the application programming interfaces link the ADC to server virtualization systems and data center provisioning frameworks in order to deliver the promise of real-time infrastructure. What Vendors Provide ADC Solutions Today? During the past five years, the innovations have largely segmented the market into vendors that understand complex application environments and the subtleties in implementations (examples would be F5, Citrix NetScaler and Radware) and those with more of a focus on static feature sets and networking. "Magic Quadrant for Application Delivery Controllers" provides a more complete analysis and view of the vendors in the market. Vendors that have more-attractive offerings will have most or all of these attributes:
A strong set of advanced platform capabilities
Customizable, extensible platforms and solutions
A vision focused on application delivery networking
Affinity to applications:
Needs to be application-fluent (that is, they need to "speak the language")
Support organizations need to "talk applications"
What Should Enterprises Do About This?
Enterprises must start to move beyond refreshing their load-balancing footprint. The features of advanced ADCs are so compelling for those that make an effort to shift their thinking and organizational boundaries that continuing efforts on SLBs is wasting time and resources. In most cases, the incremental investment in advanced ADC platforms is easily compensated by reduced requirements for servers and bandwidth and the clear improvements in end-user experience and productivity. In addition, enterprises should:
Use the approach documented in "Five Dimensions of Network Design to Improve Performance and Save Money" to understand user demographics and productivity tools and applications. Also, part of this requirements phase should entail gaining an understanding of any shifts in application architectures and strategies. This approach provides the networking team with much greater insight into broader IT requirements.
Understand what they already have in their installed base. We find, in at least 25% of our interactions, enterprises have already purchased and installed an advanced ADC platform, but are not using it to its potential. In some cases, they already have the software installed, so two to three days of training and some internal discussions can lead to massive improvements.
Start building application delivery expertise. This skill set will be one that bridges the gaps between networking, applications, security and possibly the server. Organizations can use this function to help extend the career path and interest for high-performance individuals from groups like application performance monitoring or networking operations. Networking staff aspiring to this role must have strong application and personal communication skills to achieve the correct balance. Some organizations will find they have the genesis of these skills scattered across multiple groups. Building a cohesive home will provide immediate benefits, because the organization's barriers will be quickly eliminated.
Start thinking about ADCs as strategic platforms, and move beyond tactical deployments of SLBs. Once organizations think about application delivery as a basic infrastructure asset, the use of these tools and services (and associated benefits) will be more readily achieved.
Note: We have defined a category of advanced ADCs to distinguish their capabilities from basic, more-static function load balancers. These advanced ADCs operate on a per-transaction basis and achieve application fluency. These devices become actively involved in the delivery of the application and provide sophisticated capabilities, including:
Application layer proxy, which is often bidirectional
Optimization of SAP Infrastructure to result in better performance, low costs and high energy efficiency
20 Apr 2011:
Today IBM (NYSE: IBM) announced that Audi selected IBM to build a cloud environment for Audi's SAP infrastructure to deliver higher performance, fast and flexible provisioning of SAP applications and capacities, lower infrastructure costs, and above-average energy efficiency, with the ability to scale future SAP applications almost without limit.
Audi was facing challenges scaling its IT systems due to the increased use of business-critical applications in areas such as production and logistics, supplier relationship management and human resources, which strained its IT infrastructure's reliability.
In April 2010, Audi signed a contract with IBM to rebuild their
existing SAP infrastructure, including consolidation and virtualization
of the server hardware, process standardization, opportunities for
performance-related billing and much higher operational flexibility.
Audi's new SAP Infrastructure solution is based on a new generation of
high-performance IBM POWER 7 Servers and IBM database technology (DB2).
"Along with a very high level of reliability and failure safety, the
new SAP Infrastructure solution, which we will migrate into a private
cloud, substantially lowering energy consumption," said Audi's Lorenz
Schoberl, head of IT Infrastructure Services. "The DB2 solution's
built-in data compression capability will enable us to save time and
reduce costs of storage and archiving."
"We were able to demonstrate that our combination of POWER servers
and DB2 will decrease the total cost of ownership over the next four
years -- from a business and technology point of view," said Gunter
Frohlich, IBM Client Manager for Audi.
The new infrastructure is fully operational and will be managed by
IBM in a private cloud environment hosted in Audi's data center.
About IBM Cloud Computing
IBM has helped thousands of clients adopt cloud models and manages
millions of cloud based transactions every day. IBM assists clients in
areas as diverse as banking, communications, healthcare and government
to build their own clouds or securely tap into IBM cloud-based business
and infrastructure services. IBM is unique in bringing together key
cloud technologies, deep process knowledge, a broad portfolio of cloud
solutions, and a network of global delivery centers. For more
information about IBM cloud solutions, visit www.ibm.com/smartcloud
SAN JOSE, CA -- (MARKET WIRE) -- 03/22/11 -- Brocade® (NASDAQ: BRCD) today announced it is taking a leadership position to help define standards to enable scalability and manageability in hyper-scale cloud infrastructures. Brocade has become an initial member of the Open Networking Foundation (ONF), a non-profit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN).
SDN involves several components, one of the most important being standards-based OpenFlow, an emerging standard that gives service providers granular control of their network infrastructures. Brocade will leverage its work in developing OpenFlow across its high-performance service provider portfolio to enable customers to build high-value applications across their networks with greater efficiency and unparalleled simplicity.
Today's service providers and network operators face a number of challenges that require multiple solutions in order to ensure highly efficient and profitable operation. Brocade's goal in working with the Open Networking Foundation is to alleviate the burden of operational complexity for service providers by leveraging OpenFlow to manage and operate their networks.
Brocade has developed an OpenFlow enabled IP/MPLS router as part of its service provider product portfolio for application verification and interoperability testing with its partners and customers. Brocade plans to make additional OpenFlow strategy and product announcements later this year. Brocade will initially focus its efforts on delivering solutions that enable the scalability and manageability required in hyper-scale cloud infrastructures.
"Stronger definition of network behavior in software is a growing trend, and open interfaces are going to lead to faster innovation," said Nick McKeown, ONF board member and professor at Stanford University.
"In June 2010, Brocade was one of the first major networking vendors to publicly endorse OpenFlow," said Ken Cheng, vice president, Service Provider Products, Brocade. "Our goal is to leverage OpenFlow to build compelling cloud networking solutions for service providers and network operators worldwide, while lowering the cost associated with operating their networks."
Social Media Tags: Brocade, OpenFlow, NetIron, Storage Area Networks, SAN, IP, Fibre Channel, Ethernet, WAN, LAN, Networks, Switch, Router
About Brocade Brocade® (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, and VCS are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Brocade CTO Named to TechAmerica CLOUD(2) Commission
Commission to Provide Recommendations on Deployment of Cloud Technologies to the United States Federal Government
SAN JOSE, CA -- (MARKET WIRE) -- 04/15/11 -- Brocade (NASDAQ: BRCD) today announced that Dave Stevens, the company's chief technology officer (CTO) has been named a Commissioner on the TechAmerica Foundation's "Leadership Opportunity in U.S. Deployment of the Cloud," known also as CLOUD(2).
The commission's mandate is to deliver recommendations to the U.S. government on ways it can effectively deploy cloud technologies and set specific public policies that will help drive further cloud innovation in both the private and public sectors.
Brocade has direct and highly relevant experience in the challenges and opportunities that the CLOUD(2) Commission is addressing by virtue of its 15 years of experience in building mission-critical data center networks for some of the most demanding IT environments in the world. This experience and expertise has positioned Brocade to address the challenges of moving to more agile, flexible cloud IT models.
The Brocade approach, as defined by its Brocade One™ strategy, is to help its customers migrate smoothly from current networking architectures to a world where information and applications reside and can be accessed anywhere through open, multivendor cloud technologies.
"Brocade is an established leader in building and deploying fabric-based data center architectures, and customers continue to trust their networks to Brocade as they move to highly virtualized and cloud models," said Dave Stevens, chief technology officer at Brocade. "I am honored to serve as a commissioner for CLOUD(2), and I Iook forward to the opportunity to leverage our experience in this space and to play a key role in advancing the deployment of cloud architectures."
The commission will make recommendations for how government should deploy cloud technologies and address policies that might hinder U.S. leadership of the cloud in the commercial space. Recommendations for government deployment will be presented to Federal Chief Information Officer Vivek Kundra. Commercial-facing recommendations will be shared with Commerce Secretary Gary Locke and Commerce Under Secretary Pat Gallagher.
"The Obama Administration has demonstrated a clear understanding of the need to adopt cloud technologies across the government enterprise," said Dallas Advisory Partners Founder, and TechAmerica Foundation Chairman, David Sanders. "CLOUD(2) represents a broad range of companies, and is well-positioned to provide diverse insight on issues critical to the cloud. These new commissioners will be essential to the continued advancement of U.S. innovation, and we look forward to providing the Administration constructive recommendations that address these critical issues."
The commission is composed of 71 experts in the field, from both the business and academic worlds. Leading the CLOUD(2) commission are co-commissioners Salesforce.com CEO and Chairman Marc Benioff and VCE Chairman and CEO Michael D. Capellas, as well as CSC North American public sector president Jim Scheaffer and Microsoft corporate VP of technology policy and strategy Dan Reed.
Also joining co-chairmen Benioff and Capellas representing academia will be John Mallery of Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory, and Michael R. Nelson, visiting professor of Internet studies in Georgetown University's Communication, Culture and Technology Program.
About Brocade Brocade (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. (www.brocade.com)
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks, MyBrocade, VCS, and VDX are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners
To design a good cloud management platform we need to understand the managed environment. As we know, the workloads include not only those running on virtual infrastructure but also those on traditional infrastructure. So we need to design a management platform that can support delivery of traditional services as well as cloud services.
The advantage of using the IBM reference architecture (refer to the previous chapter) is that we keep the service management cost to a minimum and are able to manage multiple services (IaaS, PaaS, SaaS, traditional services) through a single management platform (the Common Cloud Management Platform).
The design of the management platform is mainly driven by
what platforms we need to manage as well as the services we have to deliver.
The core components of the management platform are determined by the amount of
service automation expected to be provided by the platform.
The cloud management platform can be thought of as a Service Delivery Platform, as applied in the telecommunications industry. The term Service Delivery Platform (SDP) usually refers to a set of components that provides a service delivery architecture (such as service creation, session control and protocols) supporting multiple service delivery models.
The core components can again be classified into business support (BSS) components and operational support (OSS) components. The
business components include ways to manage the customer, subscription, offering
& catalog, contract, order, billing, and financial aspects of the platform.
The OSS deals
with the backend aspects of fulfilling the service request. So it includes
components like service automation, provisioning, monitoring and management.
The IBM Tivoli suite of products addresses almost all of the OSS requirements as well as some of the key BSS components. As an architect, the key decisions are to look at the capabilities required based on the client's needs and to create a platform that is extensible. This needs to be done with flexibility in mind, which means you have the ability to add and remove components to support different capabilities. In an established and mature data center, it is highly unlikely that all these components are delivered by a single vendor. That's why an architecture built on open standards is critical to the success of building a good management platform.
IBM is leading the efforts for adoption of standards by different cloud providers, consumers and tools vendors. The work being done by IBM with The Open Group and the Cloud Standards Customer Council are examples of this.
Once we have determined the functional components of our solution we need to address the non-functional requirements. These include aspects like security, availability, resiliency, performance, scalability, capacity planning and sizing. We will need to determine these aspects for the management platform based on the size and heterogeneity of the managed environment. We will discuss these aspects in the next chapter.
Teresa Takai, the Defense Department's chief information officer, says the "paramount" goal of effective security in a cloud computing infrastructure is best achieved using an internal "private" system, though she wouldn't rule out use of commercial providers.
In oral testimony at a hearing of the House Armed Services Subcommittee on Emerging Threats and Capabilities on April 6, Takai said Defense could opt for public cloud services offered by companies such as Google and Microsoft Corp.
In response to questions from Rep. James Langevin, D-R.I., Takai said, "There will be instances where we [can] use commercial cloud providers ... [if] they meet our standards." She did not specify what type of applications Defense would host on a commercial cloud.
Takai added the department plans to tap the Defense Information Systems Agency, which already is providing private cloud services to the Army and email service for 1.4 million personnel. The Army, Takai said, is "looking to move [its] apps to the cloud."
One of her key priorities is to secure the Pentagon's classified networks after masses of data were illicitly siphoned off last fall to the WikiLeaks website, said Takai, who took office last October. In her prepared testimony, she said Defense plans to deploy a public key infrastructure-based identity credential on a hardened smart card for use on the department's Secret classified networks. It is similar to, but stronger than, the technology in the Common Access Card on unclassified networks.
Defense also plans to use a Host-Based Security System to protect classified networks, a tool that "will allow us to know who is on the network" and detect anomalous behavior, Takai told the hearing.
Key challenges and focus areas for IT include enhancing efficiency,
security, resource utilization, flexibility, and simplifying data center
management, among others. Intel works closely with leading systems and
solution providers to deliver proven reference architectures to address
IT challenges. This work is based on IT requirements—from a wide range
of end users—that address challenges in evolving to cloud and next-
generation data centers, including the evolving usage requirements of
the Open Data Center Alliance.
This lab-based experience is embodied in Intel® Cloud Builders
reference architectures. Each reference architecture provides detailed
instructions on how to install and configure a particular cloud software
solution using Intel® Xeon® processor-based servers.
Developed with ecosystem leaders, the following reference architectures relate to building a cloud, or Infrastructure as a Service (IaaS), and to enhancing and optimizing cloud infrastructure with a focus on security, efficiency, and simplifying your cloud environment.
Learn more about how to build and optimize your cloud infrastructure via the reference architecture guides below.
ARMONK, N.Y. - 07 Apr 2011: IBM (NYSE:IBM) has joined more than 45 leading cloud organizations to form the new Cloud Standards Customer Council, which is managed by OMG®. Organizations including Lockheed Martin, Citigroup and North Carolina State University have already joined the Council, which will help advance cloud adoption by prioritizing key interoperability issues such as management, reference architectures, hybrid cloud, as well as security and compliance.
The Council will complement vendor-led cloud standards efforts and establish a core set of client–driven requirements to ensure cloud users will have the same freedom of choice, flexibility, and openness they have with traditional IT environments. The Cloud Standards Customer Council is open to all end-user organizations and further enhances customers' abilities to offer both public and private cloud offerings through a standardized platform.
IBM is inviting all of its users to participate in the CSCC and work together in addressing the challenges faced while implementing Cloud Computing. The group will work to lower the barriers for widespread adoption of Cloud Computing by helping to prioritize key Interoperability issues such as cloud management, reference architecture, hybrid clouds, as well as security and compliance.
“To make Open Cloud successful and reflective of real business needs, IBM is asking for client feedback regarding their direction and priorities around cloud standards development,” said Angel Diaz, vice president, IBM Software Standards. “This council is designed to focus on the reality of what provides the greatest cloud computing benefits for clients. Ultimately, this effort is about how organizations can use what they have today and extend their business - using open standards - to get the greatest benefits from cloud.”
In this post, we offer our initial forecast of IT cloud services delivery across five major IT product segments that, in aggregate, represent almost two-thirds of enterprise IT spending (excluding PCs). This forecast sizes IT suppliers' opportunity to deliver their own IT offerings to customers via the cloud services model ("opportunity #1", as described in our recent post Framing the Cloud Opportunity for IT Suppliers).
SAN FRANCISCO, CA,
07 Apr 2011:
IBM (NYSE: IBM) today
unveiled its next generation IBM SmartCloud, an enterprise-class, secure
cloud specifically created to meet the demands of businesses.
To accelerate the shift from experimentation, development and
assessment to full scale enterprise deployment of cloud, IBM is building
out its existing cloud portfolio with IBM SmartCloud, enterprise cloud
technologies and services offerings for private, public and hybrid
clouds based on IBM hardware, software, services and best practices.
As part of this announcement, IBM is demonstrating a next-generation,
enterprise cloud service delivery platform currently piloting with key
clients and available later this year. For the first time, enterprise
clients will be able to select key characteristics of a public, private
and hybrid cloud to match workload requirements from simple Web
infrastructure to complex business processes, along five dimensions:
· Security and isolation
· Availability and performance
· Technology platforms
· Management Support and Deployment
· Payment and Billing
The IBM SmartCloud includes a broad spectrum of secure managed services to run diverse workloads across multiple delivery methods, both public and private. It includes customer choice with the potential for
end-to-end management of service delivery from the server and operating
system to the application and process layer.
"The new IBM SmartCloud allows for the best of both worlds – the cost savings and scalability of a shared cloud environment plus the security, enterprise capabilities and support services of a private environment," said Erich Clementi, senior vice president, IBM Global Technology Services. "In thousands of cloud engagements, we have discovered that enterprise clients want a choice of cloud deployment models that meet the requirements of their workloads and the demands of their business."
This level of choice and control translates into capabilities
customized to your needs and priorities, whether you’re deploying a
simple web application, an ordering logistics system or a complete ERP environment.
The new IBM cloud can enable organizations, their employees and
partners, to get what they need, as they need it – from advanced
analytics and business applications to IT infrastructure like virtual
servers and storage or access to tools for testing software code - all
deployed securely across IBM’s global network of cloud data centers.
The IBM SmartCloud has two implementation options: Enterprise and Enterprise +.
- Enterprise – Available today, expanding on our existing Development and Test Cloud and allowing customers to expand on internal development and test efforts, with reduction of application development tasks from days to minutes via automation and rapid provisioning, and with over 30% reduction in costs versus traditional application environments. This offering is available immediately.
- Enterprise + -- To be made available later this year, Enterprise + will complement and expand on the value of Enterprise, offering brand new capabilities that provide a core set of multi-tenant services to manage virtual server, storage, network and security infrastructure components, including managed operational services.
Summary: A revolution is defined as a change in the way
people think and behave that is both dramatic in nature and broad in scope. By
that definition, cloud computing is indeed a revolution. Cloud computing is
creating a fundamental change in computer architecture, software and tools
development, and of course, in the way we store, distribute and consume
information. The intent of this article is to aid you in assimilating the
reality of the revolution, so you can use it for your own profit and
well-being. Learn more>
Last year’s acquisition policy pronouncements are starting to be felt
across the U.S. Army, with upticks in cloud computing initiatives,
increasing use of fixed-price contracts and adoption of social media.
“Army IT spending will remain stable; the goal is to optimize the IT
[spending]. Optimization will be guided by computing trends,” said Gary
Winkler, Army program executive officer for enterprise information systems.
Efforts to improve efficiency, realign spending priorities and
streamline a cumbersome acquisition process were launched during the
past year amid a tightening national budget by Defense Secretary Robert
Gates and Ashton Carter, undersecretary of defense for acquisition,
technology and logistics.
Leading the charge for the Army’s efforts to hold down spending and
become more efficient are cloud computing initiatives, mobile
technologies, data center consolidation and social collaboration.
Winkler said that mobile data traffic is on track to increase by 39
times between 2009 and 2014, and the social software market is showing
40 percent growth per year through 2013 — also contributing to getting
the Pentagon’s policies rolling further down in operations.
The Army also wants to increase use of firm fixed-price and
multiple-source contracts, as directed in Carter’s Better Buying Power
initiative, and is looking to maximize broadly scoped contracts that can be
used for a variety of missions.
However, there are still plenty of challenges, and there likely will
be more to come. Winkler predicted that force reductions could still lie
ahead for DOD, citing his own experience in the 1980s when, like now,
an insourcing effort was followed by a hiring freeze — which was later
followed by layoffs.
“We can tighten our belts and squeeze a little bit [as directed by
the Pentagon] — but I think it’s going to be more than just a little
bit,” Winkler said.
Still, PEO-EIS has been involved in the development of Better Buying
Power tenets, including helping shape concepts and strategies for
improving tradecraft services, establishing common taxonomy and
reforming IT acquisition — all banner items in Carter’s 23-point
acquisition reform plan released last September. Read More>
"Provision public cloud resources or securely extend your internal
virtualized infrastructure into the public cloud with VMware and our
vCloud Powered service providers, the largest ecosystem of cloud
computing partners. Leverage secure hybrid cloud resources with
confidence while providing choice and flexibility, ensuring
interoperability and portability of workloads between cloud environments
with a VMware vCloud infrastructure built on
VMware vCloud Director, and
VMware vShield." Read More>
"Security often comes up as a big stopping point for cloud computing.
One of the ways around this is to build a private cloud – one that
remains within the corporate firewall and wholly controlled internally.
That was the approach taken by Los Alamos National Laboratory as it
seeks to create an infrastructure on demand (IOD) architecture to
simplify the rollout of new technology projects and to eliminate delays
in storage, server and network provisioning.
Anil Karmel, IT manager at Los Alamos National Lab, noted four tenets that played a major role in the private cloud decision:
• green IT
• streamlined operations
• rapid scaleup/down
“As we deploy more virtual servers, we consume far less power and also
reduce electronic waste,” said Karmel. “We estimate eventual savings of
$1.3 million annually due to IOD.”
Server capacity on demand is now achievable in a few clicks. Instead of
30 days to provision a server, it now takes less than 30 minutes.
The organization is utilizing HP c7000 blade enclosures along with HP
Virtual Connect Fibre Channel/Flex 10 Ethernet. HP BL460c and BL490c
blades are used, with each blade containing multiple quad-core processors.
A NetApp SAN was brought in to add storage capacity. This is based on
the NetApp V Series with 2 PB of Tier 2 SATA storage. Tier 1 is
provided by existing HP arrays.
The cloud itself consists of four elements: a web portal at the front
end; Microsoft SharePoint as the automation engine for cloud workflows,
and also as the integration point for functions such as chargeback;
VMware vCloud Director to manage and operate the cloud; and VMware
vShield to provide security at both the application level and at the
user device level.
“Any virtual environment has to be cost effective, so that means it has
to be simple while being aware of any and all changes in real time,” said Karmel.
This is especially important in the security arena. Traditional security
operates at the hardware or software layer. But the addition of a
virtualization layer, said Karmel, provides too many gray areas for such
security tools to operate effectively. Hence security itself is now
being virtualized to eliminate yet another wave of security holes
showing up in the corporate networks.
Using Infrastructure on Demand, the National Lab is creating virtual
security enclaves using vShield that prevent one desktop or client from
infecting others and keep virtual machines (VMs) out of harm’s way.
Rules are set indicating access rights, as well as security protocols
based on threat detection. Traditional security tools interface with
this virtual security layer to keep servers and devices more protected.
Any time a threat is detected, the offending virtual computer is sent to
a remediation area, which has no network connectivity with which to infect other machines.
“This all occurs automatically based on preset policy,” said Karmel. “If
a VM is moved from one host to another, the security policy given to it
moves with it.”
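To make the “policy moves with the VM” idea concrete, here is a minimal Python sketch of that behaviour. It is purely illustrative: the names (SecurityPolicy, migrate, on_threat_detected, “remediation-enclave”) are invented, and this is not the lab’s vShield configuration or any VMware API.

    from dataclasses import dataclass, field

    # Illustrative only: invented names, not the lab's actual setup or a VMware API.
    @dataclass
    class SecurityPolicy:
        allowed_ports: set              # access rights set by rule, per the article
        quarantine_on_threat: bool = True

    @dataclass
    class VirtualMachine:
        name: str
        host: str
        policy: SecurityPolicy = field(default_factory=lambda: SecurityPolicy({22, 443}))

    def migrate(vm: VirtualMachine, new_host: str) -> None:
        # The policy is attached to the VM object, so moving hosts keeps it intact.
        vm.host = new_host

    def on_threat_detected(vm: VirtualMachine) -> str:
        # Per preset policy, an offending VM goes to an isolated remediation enclave.
        if vm.policy.quarantine_on_threat:
            vm.host = "remediation-enclave"   # no route back to production networks
            return f"{vm.name} quarantined"
        return f"{vm.name} flagged for manual review"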
To prevent VM sprawl, VMs are given an expiry date. This is one year by
default, though it can be adjusted. Thirty days before the due date, an
email is automatically generated asking the VM owner about renewal.
A similar email is relayed with 10 days left and then again the
day before expiry. As soon as the VM is turned off, the user is informed
of the fact and asked if he/she wants it back online. Even then, 29
days later, the user is told that the VM is scheduled for deletion. The next
day it is deleted.
However, a backup is retained for seven years just in case. The NetApp
storage is used to create snapshots of VMs before they are retired to
tape. For now, restores are not automated. But in the next version of
Infrastructure on Demand, users will be able to restore VMs they desire
in a few clicks.
“Lifecycle management of VMs is very important,” said Karmel.
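As a rough illustration of the lifecycle schedule described above, the Python sketch below encodes the reminder and deletion timing. The function and constant names are hypothetical; the actual Infrastructure on Demand workflow is driven through SharePoint, not this code.

    from datetime import date

    # Hypothetical sketch of the expiry schedule; names and structure are invented.
    REMINDER_DAYS = (30, 10, 1)        # reminder emails before the expiry date
    DELETE_AFTER_OFF_DAYS = 30         # a powered-off VM is deleted after ~30 days

    def lifecycle_actions(today, expiry_date, powered_off_on=None):
        """Return the lifecycle actions due for one VM on a given day."""
        actions = []
        days_left = (expiry_date - today).days
        if days_left in REMINDER_DAYS:
            actions.append(f"email owner: VM expires in {days_left} day(s)")
        if powered_off_on is not None:
            off_days = (today - powered_off_on).days
            if off_days == DELETE_AFTER_OFF_DAYS - 1:
                actions.append("email owner: VM scheduled for deletion tomorrow")
            elif off_days >= DELETE_AFTER_OFF_DAYS:
                actions.append("snapshot to tape (retain 7 years), then delete VM")
        return actions

    # Example: 30 days before expiry triggers the first renewal reminder
    print(lifecycle_actions(date(2011, 3, 1), date(2011, 3, 31)))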
The organization has erected a chargeback structure. Cloud resources are
priced according to CPU, RAM and disk. Users can see the total cost
before submitting a request for IT resources. Following a request, the
line manager has to approve the request and accept the charges to that unit.
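A chargeback quote of this kind is simple to express. The sketch below uses made-up unit rates (the RATES values are assumptions, since the lab’s actual pricing is not given in the article) just to show how a total could be presented before a request is submitted.

    # Made-up unit rates purely for illustration; not the lab's actual pricing.
    RATES = {"vcpu": 25.00, "ram_gb": 10.00, "disk_gb": 0.50}   # $/month, assumed

    def monthly_cost(vcpus, ram_gb, disk_gb):
        # Show the requester the total monthly price before submission
        return (vcpus * RATES["vcpu"]
                + ram_gb * RATES["ram_gb"]
                + disk_gb * RATES["disk_gb"])

    # A 4 vCPU / 16 GB RAM / 200 GB disk request would show $360.00/month
    print(f"Estimated cost: ${monthly_cost(4, 16, 200):,.2f}/month")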
“You have to build best practices around our workloads,” said Karmel.
Service Level Agreements (SLAs) are set at four nines (99.99 percent). If some hardware
goes down and Infrastructure on Demand doesn’t meet the SLA, it doesn’t
charge for that resource for that month. In addition, uptime and
availability metrics are regularly published so users are fully informed.
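For context, the arithmetic behind a four-nines SLA is straightforward; the snippet below assumes an average 730-hour month.

    # "Four nines" availability, assuming an average 730-hour month
    availability = 0.9999
    allowed_downtime_min = (1 - availability) * 730 * 60
    print(f"{allowed_downtime_min:.1f} minutes of allowed downtime per month")  # ~4.4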
At the moment, separate network, security and virtual server teams are
being maintained to monitor the infrastructure. Over time, this may be
streamlined to one centralized unit."
One of the important things to decide when you discuss Cloud Service Strategy and
Design is the consideration of a Reference Architecture. This is something that is useful to align to,
as it represents the blueprint for your cloud and reduces implementation
risk. The Cloud Computing Reference
Architecture (RA) is intended to be used as a blueprint / guide for
architecting cloud implementations, driven by the functional and non-functional
requirements of the respective cloud implementation. The RA defines the basic
building blocks - architectural elements and their relationships - which make up
the cloud. The RA also defines the basic principles which are fundamental for
delivering & managing cloud services.
Reference architectures are more than just a collection of technologies and products. They
consist of several architectural models and are much like a city plan. The RA defines how your cloud platform should
be constructed so that it can satisfy not only your current demands but
also be extensible to support the future needs of a diverse user population. So
this blueprint should be responsive to changing business and technology
requirements and adaptable to emerging technologies. Existing “legacy” products and
technologies as well as new cloud technologies can be mapped onto the AOD to show
integration points amongst the new cloud technologies and integration points
between the cloud technologies and already existing ones. By delivering best practices in a standardized,
methodical way, an RA ensures consistency and quality across development and
delivery. The
IBM Cloud Computing RA is structured in a modular fashion around the functional capabilities
(architectural elements), the user roles (that we discussed in Chapter 12) and
their corresponding interactions. The IBM CCRA is based on several
cloud engagements and incorporates the good practices and methods
implemented across these projects. So for an end user adopting these good
practices, the risk and cost of implementing their cloud will be low. The
CC RA is built on the ELEG (Efficiency, Lightweightness, Economies-of-scale,
Genericity) principles. One
of the principles that I want to highlight here is the Genericity Principle –
that’s the capability to define and manage generically along the lifecycle of
Cloud Services: be generic across I/P/S/BPaaS and provide an ‘exploitation’
mechanism to support various cloud services using a shared, common management
platform (“Genericity”). As discussed in
the cloud delivery and deployment models (Chapter 3), there can be
many models for the deployment and delivery of Cloud Services. As we know, a Cloud
Service can represent any type of (IT) capability which is provided by the Cloud
Service Provider to Cloud Service Consumers - Infrastructure, Platform,
Software or Business Process Services. The beauty and significance of the IBM
Cloud Computing Reference Architecture is that it can cater to any of these
service delivery and deployment models. So whether you are building a private
cloud or a public cloud, or using cloud to deliver IaaS, PaaS or SaaS, the RA
remains the same and handles all of these combinations. We have seen the
capabilities that we need (Chapter 6) for implementing a common cloud
management platform. IBM
has recently submitted
Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) to the Cloud
Architecture Project of the Open Group,
a document based on “real-world input from many cloud implementations across
IBM” meant to provide guidelines for creating a cloud environment. Check
out this link
which has the interview with Heather Kreger, one of the authors of Cloud Computing
Reference Architecture as well as the details of the components that make up
the RA. On the topic, there is also an article that I found in the SYS-CON Cloud Computing
Journal comparing the Reference Architectures of the Big Three
(IBM, HP and Microsoft), which is an interesting read. Before we get
into the details of the Service Implementation / Transition phase,
it is important that we understand the bigger picture. The word document IBM
Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) provides a great
description of this bigger picture and goes into the details as required. The
architectural principles define the fundamental principles which need to be
followed when realizing a cloud across all implementation stages (architecture,
design, and implementation). This is a must read for all - development teams
implementing the cloud delivery & management capabilities as well as
practitioners implementing private clouds for customers.
- By the End of the Decade One in Four UK Power Stations are Set to
Close and UK Gas Production is Expected to be Half of Current Levels,
yet Demand for Electricity is Expected to Increase by More than 50 Per
Cent by 2050
- This Collaboration is to Create a Flexible,
Secure and Scalable Data and Communications Hub to Support the UK
Government's Smart Meter Implementation Programme and its Strategy to
Cut Emissions by 80 Per Cent by 2050
21 Mar 2011:
IBM (NYSE: IBM)
and Cable&Wireless Worldwide (LSE: CW.L) today jointly announce
their collaboration to develop a new intelligent data and communications
solution, UK Smart Energy Cloud, to support the UK's Smart Meter
Implementation Programme, which aims to rollout more than 50 million
smart meters in the UK.
UK Smart Energy Cloud has the potential to provide a complete
overview of energy usage across the country and pave the way for easier
implementation of a smart grid. The solution will utilise the extensive
experience IBM has gained from leading and implementing smart grid
programmes around the world and its proven enabling software and
middleware. The solution will be supported by C&W Worldwide's
extensive, secure next-generation network and communications integration expertise.
There has never been a more challenging time for the energy industry
with decisions being taken to protect the country's energy supply that
will have significant implications for everyone in the UK. Both smart
meters and the smart grid are significant steps on the journey to a new
energy future, potentially changing for the better the way we consume
and distribute energy.
Improve performance and scalability by optimizing IT assets based on workload to ensure the ideal elasticity of your cloud.
Enterprise quality of service (QOS) virtualization provides the best
foundation for your mission-critical applications running in the cloud.
Automated management, provisioning and optimization of your physical
and virtual cloud resources ensure optimal utilization to meet changing business needs.
Self-service portal and standardized service catalog leverage all
the features of your cloud infrastructure to enable automated delivery
of services without IT intervention.
Metering and billing features provide the capabilities to improve
cost transparency and offer more flexible pricing schemes for your cloud services.
The unprecedented interest and projected IT spend on cloud computing
is coming from all types of organizations, businesses and governments
that are seeking to transform the way they deliver IT services and
improve workload optimization so they can quickly respond to changing
business demands. Cloud computing can significantly reduce IT costs and
complexities while improving asset utilization, workload optimization
and service delivery.
Today’s IT Infrastructures face challenges on many levels:
Composed of silos that lead to disconnected business and IT infrastructures
Contain static islands of computing, which result in inefficiencies and underutilized assets
Struggle with rapid data growth, regulatory compliance, integrity and security
Face continuously rising IT administration costs
As a result of these challenges, organizations are demanding an IT
infrastructure and service delivery model that enables growth and
innovation. An effective cloud computing environment built with
IBM Power Systems™ cloud solutions helps organizations transform their
data centers to meet these challenges:
Delivers integrated visibility, control, and automation across all business and IT assets
Is highly optimized and scales IT up and down in line with business needs
Addresses the information challenge by delivering flexible and secure access to data and mitigating risks
Utilizes flexible delivery models, automation and virtualization to
greatly simplify IT service delivery and provide enterprise QOS
capabilities including higher application availability, improved
performance, more scalability and enterprise-class security.
Power Systems cloud solutions enable customers to build an effective
cloud computing environment that helps organizations reduce IT costs,
improve service delivery and drive business innovation.