With the recent exploration of cloud computing technologies, organizations are using cloud service models like infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) along with cloud deployment models (public, private and hybrid) to deploy their applications.
In the cloud world there is a concept based on application characteristics: cloud-enabled versus cloud-centric applications. In this blog post, Dan Boulia provides a concise explanation of the concept.
You can say that a cloud-enabled application is an application that was moved to the cloud but was originally developed for deployment in a traditional data center. Some characteristics of the application had to be changed or customized for the cloud. On the other hand, a cloud-centric application (also known as cloud-native or cloud-ready) is an application that was developed with the cloud principles of multi-tenancy, elastic scaling, and easy integration and administration in its design.
When developing an application that will be deployed in the cloud, you must keep the cloud principles in mind and treat them as part of the application. So we come to the first point: Is it better to work within an existing application or to completely redesign it? There is no exact answer because it depends. You have to evaluate the level of effort (labor, time and cost) to transform the application into a cloud-enabled one versus the effort to completely redesign it as a cloud-centric application.
The second point is: Will my cloud-enabled application work better than a new cloud-centric application? Here I would say no. It’s rare to find an existing traditional application that was developed with any of the cloud principles in mind. It may be possible to construct the same feel (for the user) as a cloud-centric application, but it will not function the same way internally.
Changing an existing application could be easier since you already have the skills and tools in the organization and you won’t need to learn any new technology. However, while it may be easier to change the application, in the long term it will be harder to maintain. New technologies (social media, mobile, sensors) continue to appear and it is becoming more important to integrate them. Doing this will require additional and continuous effort and may exponentially increase development and support costs.
Now comes the third point: What can you use to help expedite the move or redevelopment of an existing application to a cloud-centric model? Many cloud companies have development tools that can help an organization on this path. For instance, IBM has recently announced IBM Bluemix, a development platform to create cloud-centric applications. Shamim Hossain explains the capabilities in more detail in his blog post. Another option is to use IBM PureApplication System to expedite the development.
I discussed some points here that I hope provide a better understanding of an important concept in cloud computing and how to address it. Let me know your thoughts! Follow me on Twitter @varga_sergio to talk more about it.
Have you checked out the features in the new release of the IBM Smart Business Development and Test on the IBM Cloud? Well you should. Version 1.1 provides support for Virtual Private Networks and Virtual Local Area Networks plus new premium support services are now available. I've heard from my tweeps on Twitter that the new release rocks so had to share the news with all of you in our very cool developer community.
Okay so if you want to realize faster application deployment with reduced costs, you have to check out the IBM Cloud. You virtually have no infrastructure to maintain and benefit from pay-as-you-go pricing. And, you can set up more accurate test environments in minutes versus weeks using standardized configurations. Sound irresistible?
So you ask, what does this new release really mean for me as a developer? Well here's a quick summary of what Version 1.1 has to offer:
Then you'll want to check out the web site for IBM Smart Business Development and Test on the IBM Cloud to see how you can get started - you can request a contract right from the web page.
And once you get signed up, take advantage of the IBM developerWorks cloud computing resource center to keep up with technical knowledge on cloud application and services development and deployment and tools you can use to make your life easier.
Follow me on Twitter to get the latest on technical cloud resources, events and more.
Driven by trends in the consumer internet, cloud computing is becoming the new way to consume and deliver IT services. As IT professionals, we need to understand the different aspects of cloud to seize this opportunity to grow our careers and help our clients toward a successful adoption of cloud computing.
I’m in the process of learning several aspects of cloud - emerging trends in cloud solutions, workloads, infrastructure, technologies and the modern services industry. So I thought I would post my learning as a series of blogs from which any cloud enthusiast can benefit to understand cloud computing. When discussing a topic, instead of reinventing the wheel, let’s build the content with links to different articles for further reading that can provide a deeper understanding.
The articles shall cover the entire lifecycle of a cloud project, covering various aspects right from business requirements and architecture/design through implementation and operations. The intention of this blog is to provide the reader a step-by-step path through any one or more of the following broad range of topics
We will have something to learn every week and will dedicate each week to understanding one of the above topics. So by the end of the 16 weeks we have remaining in the year, we will have learned all the steps to walk on the cloud. Comments on these posts from all of the members will definitely go a long way toward getting our steps right and enriching the content. So c’mon everyone, let’s take a walk in the clouds – step by step…
I just wanted to give everybody a quick update on the Cloud Certification. I took the pre-assessment exam, Test 000-032: Foundations of IBM Cloud Computing Architecture V1, and I'm happy to say I received a passing score of 75%. The pre-assessment exam was broken down into three sections
News about data leaks, government surveillance and private information trading has made the front page of media outlets in recent years. Last year, a data protection scandal erupted when it was discovered that Facebook had started collecting and storing data from WhatsApp users after acquiring the application in 2014. While phone numbers are not shared with Facebook, other types of information are gathered, such as the type of device and the operating system used to run it.
This is supposed to be used in analytics to optimize marketing efforts. However, Facebook is already plagued by rumors that it sells this information to the NSA and the FBI, and as its information pool grows, privacy will likely diminish drastically for each citizen. Moreover, the NSA’s extraction tool, PRISM, violates the privacy of people from all over the world under the pretext of defending the state against terrorist activity. Yahoo and Google have also been accused of cooperation, along with other Silicon Valley companies that work with sensitive data.
Hackers also threaten information security, as their instruments become more advanced and intrusive. Accessing banking information or webcams has become common practice, and those who don’t pay attention to what links they click on might find themselves in danger.
Company owners and managers fear hacking the most, particularly since they can’t control what their employees browse, which is why they often end up exploring the possibilities of digital surveillance. Windows monitoring software can let a supervisor see anything from typed commands to visited URLs, and versions are available for other operating systems. And while it is acceptable to watch over workers’ activity and prevent them from engaging in any damaging actions, there are other forms of surveillance that are considered immoral, including:
Social Media Following
This has become a common practice in the workplace as colleagues start befriending each other on Facebook and following each other on Twitter and Instagram. However, this is an unfortunate thing to do from a managerial point of view. Appreciating some subordinates more on social media may cause feelings of insecurity and resentment for others. Moreover, access to a worker’s posts where they express personal convictions and ideas may cause biased decisions in the workplace in the long run. On the other hand, some employees will feel the need to limit their public sharing as work connections crowd their friend list.
Email Monitoring
Many companies have a system of employee email monitoring so that managers can stay on top of any risky situation. Most of the time people are not aware of it. This can lead to very unpleasant situations where workers use their work address for personal purposes or make comments via email regarding their supervisor or their company. Later on, they can face discrimination from supervisors or even lose their jobs, even though they were not informed about this intrusion. Employees should be warned prior to signing a contract that the information they exchange by company e-mail is not private, so they can make informed decisions about the contents of their messages.
Hiring or Firing Based on Internet Searches
Another problematic practice is that of supervisors googling candidates before an interview to find social media accounts and forming an impression of the candidate’s appearance, social status and likes. About 61% of employers engage in this practice, according to a CareerBuilder survey. Moreover, people have lost jobs because of Instagram or Facebook posts in which they exposed controversial beliefs, showed nudity or engaged in inappropriate blogging.
Over-monitoring Computers and Phones
Some companies are strict when it comes to their workers’ daily productivity, while others want to make sure the apps and sites the employees access are not a threat to their internal systems. However, monitoring websites and apps is one thing; collecting GPS information from phones is highly invasive of employees’ personal space. And while there is no explicit law forbidding it, it is a form of unjustified control that is heavily frowned upon.
All things considered, advancements in digital technology have enabled people with a complex set of tools to interact with each other, share ideas and work more efficiently. Nevertheless, it is up to each person, company, and government to decide how to best use these tools and how to establish the proper boundaries between public or private institutions and citizens.
I'll make no bones about the fact that I'm a huge fan of Cloud Foundry. It's the right play, by the right people, at the right time. Despite all the attempts to dilute the message over the last eleven years, Platform as a Service (or what was originally called Framework as a Service) is about writing code, writing data and consuming services. All the other bits, from containers to the management of them, are red herrings. They may be useful subsystems but they miss the point, which is the necessity for constraint.
Constraint (i.e. the limitation of choice) enables innovation, and the major problem we have with building at speed is almost always duplication or yak shaving. Not only do we repeat common tasks to deploy an application, but most of our code is endlessly rewritten throughout the world. How many times in your coding life have you written a method to add a new user or to extract consumer data? How many times do you think others have done the same thing? How many times are not only functions but entire applications repeated endlessly between corporations or governments? The overwhelming majority of the stuff we write is yak shaving and I would be honestly surprised if more than 0.1% of what we write is actually unique.
Now whilst Cloud Foundry has been doing an excellent job of getting rid of some of the yak shaving, in the same way that Amazon kicked off the removal of infrastructure yak shaving - for most of us, unboxing servers, racking them and wiring up networks is thankfully an irrelevant thing of the past - there is much more to be done. There are some future steps that I believe Cloud Foundry needs to take, and fortunately the momentum behind it is such that I'm confident of talking about them here without giving a competitor any advantage.
First, it needs to create that competitive market of Cloud Foundry providers. Fortunately this is exactly what it is helping to do. That market must also be focused on differentiation by price and quality of service and not the dreaded differentiation by feature (a surefire way to create a collective prisoner's dilemma and sink a project in a utility world). This is all happening and it's glorious.
Second, it needs to increasingly leave the past ideas of infrastructure behind, and by that I mean containers as well. The focus needs to be serverless, i.e. you write code, you write data and you consume services. Everything else needs to be buried as a subsystem. I know analysts run around asking "is it using Docker?" but that's because many analysts are halfwits who like to gabble on about stuff that doesn't matter. It's irrelevant. That's not the same as saying Docker is not important; it has huge potential as an invisible subsystem.
Fourth, and most importantly, it needs to tackle yak shaving at the coding level. The simplest way to do this is to provide a CPAN-like repository which can include individual functions as well as entire applications (hint: GitHub probably isn't up to this). One of the biggest lies of object-oriented design was code re-use. This never happened (or rarely did) because no communication mechanism existed to actually share code. CPAN (in the Perl world) helped (imperfectly) to solve that problem. Cloud Foundry needs exactly the same thing. When I'm writing a system, if I need a customer object, then ideally I should just be able to pull in the entire object and its related functions from a CPAN-like library because, let's face it, how many times should I really have to write a postcode lookup function?
But shouldn't things like postcode lookup be provided as a service? Yes! And that's the beauty.
By monitoring a CPAN-like library you can quickly discover (simply by examining metadata such as downloads and changes) which functions are commonly being used and have become stable. These are all candidates for standard services to be provided into Cloud Foundry and offered by the CF providers. Your CPAN environment is actually a sensing engine for future services and you can use an ILC-like model to exploit this. The bigger the ecosystem is, the more powerful it will become.
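As a rough illustration of that sensing idea, here is a minimal sketch. It assumes a hypothetical metadata feed of quarterly downloads and change counts per package (none of this is a real CPAN or Cloud Foundry API, and the thresholds are invented) and surfaces widely used, rarely changing packages as candidates for standard services.

# Hypothetical sketch: mine repository metadata for "service candidates".
from dataclasses import dataclass

@dataclass
class PackageStats:
    name: str
    downloads_last_quarter: int   # proxy for how widely used the package is
    changes_last_quarter: int     # proxy for how stable it has become

def service_candidates(stats, min_downloads=10_000, max_changes=2):
    """Widely used, rarely changing packages are candidates for managed services."""
    picks = [p for p in stats
             if p.downloads_last_quarter >= min_downloads
             and p.changes_last_quarter <= max_changes]
    return sorted(picks, key=lambda p: p.downloads_last_quarter, reverse=True)

if __name__ == "__main__":
    feed = [
        PackageStats("postcode-lookup", 48_000, 1),
        PackageStats("customer-object", 31_000, 0),
        PackageStats("experimental-ml-widget", 52_000, 9),  # popular but still churning
    ]
    for p in service_candidates(feed):
        print(p.name)   # postcode-lookup, customer-object

The point is not the specific scoring rule but that the repository's own usage data tells providers what to industrialize next.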
I would be shocked if Amazon isn't already using Lambda and the API gateway to identify future "services", and Cloud Foundry shouldn't hesitate to press any advantage here. This process will also create a virtuous cycle: new things which people develop and share in the CPAN-like library will over time become stable, widespread and provided as services, enabling other people to more quickly develop new things. This concept of sharing code and combining the collaborative effort of the entire ecosystem was a central part of the Zimki play and it's as relevant today as it was then. By the way, try doing that with containers. Hint: they are way too low level and your only hope is through constraint such as that provided in the manufacture of unikernels.
There is a battle here because if Cloud Foundry doesn't exploit the ecosystem and AWS plays its normal game, then AWS could run away with the show. The danger of this seems slight at the moment (but it will grow) because of the momentum behind Cloud Foundry and because of the people running the show. Get this right and we will live in a world where not only do I have portability between providers, but when I come to code my novel idea for my next great something, I'll discover that 99% of the code has already been done by others. I'll mostly need to stitch the right services and functions together and add a bit extra.
Oh, but that's not possible, is it? In 2006, Tom Insam wrote for me and released live to the web a new style of wiki (with client-side preview) in under an hour using Zimki. I wrote an internet mood map and a basic trading application in a couple of days. Yes, this is very possible. I know, I experienced it, and this isn't 2006, this is 2016!
Cloud Foundry (with a bit of luck) might finally release the world from the endless Yak shaving we have to endure in IT. It might make the lie of object re-use finally come true. The potential of the platform space is vastly more than most suspect and almost everything, and I do mean everything will be rewritten to run on it.
I look forward to the day that most Yaks come pre-shaved. For more read....
Cloud Security – The Topmost Concern and Opportunity
First of all, wishing all my readers a very happy and prosperous year 2012 ahead.
A few things happened towards the end of the year that were significant to me. IBM acquired Q1 Labs to drive greater security intelligence and created a new Security Division. I also joined this newly formed IBM Security Systems team last quarter as a solution architect for cloud security. This is a great time to be looking at cloud security. I am happy to be in this new role where I can provide solutions to customers to handle their cloud security concerns and make it easy for them to adopt cloud and innovate at a faster rate than before.
In my previous post, we discussed security as the topmost concern preventing customers and enterprises from adopting cloud. As part of this year’s posts, I plan to discuss the various security issues and aspects of cloud computing.
We will explore the unique challenges of cloud security and discuss which aspects are important for each customer adoption pattern that we have seen.
We will also learn how the IBM Security Framework can be used to address the various security challenges namely
· Security governance, risk management and compliance
· People and Identity
· Data and information
· Application and process
· Network, server and endpoint
· Physical infrastructure
Looking forward to your comments and input on this journey of understanding the security requirements for cloud and how we can overcome this major challenge to cloud adoption using the World’s Most Comprehensive Security Portfolio – IBM Security Systems. I’ll try to elaborate on the IBM point of view on cloud security and discuss the architectural model to address the security requirements for cloud. Stay tuned and keep those comments and inputs coming.
Possible Solution for the Mullaperiyar Dam Issue?
While I’m writing this blog, the Ministers of Tamil Nadu and Kerala are having a meeting with the Prime Minister to discuss the contentious issue of Mullaperiyar at length. For those who don’t know about this issue, it concerns the Mullaperiyar Dam in south India. Mullaperiyar Dam is a masonry gravity dam over the River Periyar, operated by the Government of Tamil Nadu based on a 999-year lease agreement. The catchment areas and river basin of the River Periyar downstream include five districts of central Kerala, namely Idukki, Kottayam, Ernakulam, Alappuzha and Thrissur, with a total population of around 3.5 million.
This dam is at centre stage again in the wake of reports that the dam is weakening due to an increase in tremor incidents in Idukki district in Kerala. Ministers from Kerala are seeking Central Government intervention to ensure the safety of the dam. At the same time, Tamil Nadu is insisting on increasing the water level in the reservoir to enhance water supply to the state. While Tamil Nadu wants to increase the water level in the reservoir, Kerala has been insisting that it be reduced from the current 136 feet to 120 feet.
Currently I don’t think we have clear metrics on the exact usage of water by each state, what the right level of water to be retained by the dam is, what the risks are, and so on. We have been relying on data that we have from the past.
However you look at it – whether there is too much water or not enough – the world needs a smarter way to think about water. We need to look at the subject holistically, with all the other considerations as well. We use water for more than drinking. We need to make an inventory of how much water we get and how it is used – for industry, irrigation, and so on. This is where I think we need smarter ways to manage water in the best possible way that addresses both states’ requirements adequately.
IBM Smarter Water Management can help us think in a smarter way about water. For instance, IBM is helping the Beacon Institute to build a source-to-sea real-time monitoring network for New York’s Hudson and St. Lawrence Rivers and to report on conditions and threats in real time. There are many other case studies across the globe on IBM Smarter Water Management.
Those interested in the problem and the possible solutions should definitely read IBM’s broader outlook on Water Management as covered in the Global Innovation Outlook.
Rivers for Tomorrow is another interesting partnership between IBM and The Nature Conservancy. IBM is providing a state-of-the-art support system for a free, online application that will provide easy access to data and computer models to help watershed managers assess how land use affects water quality.
Though it's a worldwide entity, water is treated as a regional issue. I think we should try putting technology to use to solve our water problems. The solution should be a more instrumented, interconnected and intelligent system that can not only take into consideration the real-time monitoring of the river but also include early warning systems to notify of risks related to earthquakes and the like. IBM’s Strategic Water Management Solutions include offerings to help governments, water utilities, and companies monitor and manage water more effectively. The IBM Strategic Water Information Management (SWIM) solutions platform is both an information architecture and an intelligent infrastructure that enables continuous automated sensing, monitoring, and decision support for water management operations.
And you might be wondering what this has to do with cloud and why this post is on Cloud Computing Central. For these solutions and platforms to be successful, it is highly important that we have energy-efficient, high-performance computing platforms and complex sensor, metering, and actuator networks. Such platform needs, together with the flexibility of running the solution on premises and leveraging different delivery models, can only be supported through a cloud.
I think we should just leverage these solutions on the cloud to solve this issue and keep all the states and its people happy :-).
Today IBM announced new SmartCloud Foundation capabilities to help organizations realize the potential of cloud computing. Watch the replay of the IBM SmartCloud launch webcast, to learn more about how the new announcements, including IBM SmartCloud Provisioning (delivered by IBM Service Agility Accelerator for Cloud), can help customers move beyond virtualization to more advanced cloud deployments.
Chapter 19 – Tivoli Process Automation Engine
As we discussed in the previous post, it is important that all the processes work together to bring successful automation to the cloud management platform. A process workflow automation engine is what makes this possible. In this chapter we will discuss the Tivoli process automation engine, which forms the base for IBM process automation in the cloud space.
The Tivoli process automation engine provides a user interface, configuration services, workflows and the common data system needed for IBM Service Management products and other services. As we already know, IBM Service Management (ISM) is a comprehensive and integrated approach for service management, integrating technology, information, processes, and people to deliver service excellence and operational efficiency and effectiveness for traditional enterprises, service providers, and mid-size companies. The Tivoli process automation engine, previously known as Tivoli base services, provides the base infrastructure for applications like Tivoli Maximo Asset Management, Change and Configuration Management Database (CCMDB), Tivoli Service Request Manager (SRM), Tivoli Asset Management for IT (TAMIT) and Tivoli Provisioning Manager, as well as Tivoli Service Automation Manager. Any product that has the Tivoli process automation engine as its foundation can be installed with any other product that has the Tivoli process automation engine.
IBM Service Management (ISM) comprises
By having a common process automation engine, we can successfully link operational and business services with infrastructure through a single (J2EE) platform. We can also leverage current investments by linking this engine with existing process automation technologies and products. So by building a unified platform to automate processes, we have taken data integration to the next level, where sharing data between applications has never been easier. This integrated process automation platform can support repeatable IT functions like Incident Management, Problem Management, Change Management and Configuration Management all the way through to Release Management. All of these processes tie into the CMDB, where they share consistent data via bidirectional integration. The platform supports best practices such as ITIL and other industry best practices. This facilitates an automated approach across the IT management lifecycle. It also forms the basis for automating repetitive tasks that can be handled by the system instead of requiring costly human intervention. Through its adapters, TPAE provides data federation from multiple sources that you already have, translating the information into usable data that can be leveraged by internal processes and workflows.
Figure 1 Tivoli process automation integrated portfolio
The Tivoli Process Automation Engine Wiki provides details on each of the components and capabilities that make up this integrated portfolio.
The Certification Study Guide Series: Foundations of Tivoli Process Automation Engine is an IBM® Redbooks publication that can guide you toward an IBM Professional Certification on Tivoli Process Automation Engine.
How do I size my cloud?
A cloud is not a cloud if it is not elastic. The elastic property of the cloud to expand and shrink based on demand is possible only with proper capacity planning. I feel the most difficult exercise while building a cloud solution is capacity planning for your cloud. By this, I mean you have to size
Most of the engagements that I’ve walked into typically have some existing capacity or infrastructure that they want us to leverage and use in the cloud. So the comparison becomes difficult if you don’t have a standard measuring unit for your infrastructure – for instance, how do you know how a quad-core on an Intel platform compares to a POWER7 core? I found a good explanation in this guide, in this interesting article –
The answer to this difficult question was to use something called the cloud CPU unit, which is nothing but computing power equal to the processing power of a one gigahertz CPU. When a user requests two CPUs, for example, they will get the processing power of two 1 GHz CPUs. This means that a system with two CPUs, each with four cores, running at 3 GHz will have the equivalent of 24 CPU units (2 CPUs x 4 cores x 3 GHz = 24 CPU units).
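As a minimal sketch of that arithmetic (the helper names and the demand figure below are my own, not from the article), the unit conversion and a simple host-count estimate look like this:

import math

def cpu_units(sockets: int, cores_per_socket: int, clock_ghz: float) -> float:
    """Cloud CPU units offered by one host: one unit = the power of a 1 GHz core."""
    return sockets * cores_per_socket * clock_ghz

def hosts_needed(requested_units: float, sockets: int, cores: int, ghz: float) -> int:
    """How many identical hosts are needed to cover a requested capacity."""
    return math.ceil(requested_units / cpu_units(sockets, cores, ghz))

if __name__ == "__main__":
    # The example from the post: 2 CPUs x 4 cores x 3 GHz = 24 CPU units.
    print(cpu_units(2, 4, 3.0))          # 24.0
    # Hypothetical demand of 300 units served by hosts of that size.
    print(hosts_needed(300, 2, 4, 3.0))  # 13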
The other dimension of the complexity is to determine the resource needs and do the trends and forecasting. I typically collect the projections from the clients and then put down some critical assumptions to determine how big my cloud should be. Some critical questions that I typically ask
The IBM infrastructure planner for cloud made life easy for me; it had a user-friendly interface that took me through these steps and arrived at a sizing for the managed environment. Once we know the managed environment, we can size the management platform. I’ll discuss the details of how to plan the managed environment in my next post.
I’ll be interested in putting together the top 10 parameters that are critical for sizing the cloud managed and management environments. I look forward to your comments.
Chapter 13 - Cloud Computing Reference Architecture
One of the important things to decide when you discuss cloud service strategy and design is the consideration of a reference architecture. This is something that is useful to align to, as it represents the blueprint for your cloud and reduces implementation risk. The Cloud Computing Reference Architecture (RA) is intended to be used as a blueprint/guide for architecting cloud implementations, driven by the functional and non-functional requirements of the respective cloud implementation. The RA defines the basic building blocks – the architectural elements and their relationships which make up the cloud. The RA also defines the basic principles which are fundamental for delivering and managing cloud services.
The reference architecture is more than just a collection of technologies and products. It consists of several architectural models and is much like a city plan. The RA defines how your cloud platform should be constructed so that it can satisfy not only your current demands but also be extensible to support the future needs of a diverse user population. So this blueprint should be responsive to changing business and technology requirements and adaptable to emerging technologies. Existing “legacy” products and technologies as well as new cloud technologies can be mapped onto the AOD to show integration points among the new cloud technologies and integration points between the cloud technologies and already existing ones. By delivering best practices in a standardized, methodical way, an RA ensures consistency and quality across development and delivery projects.
The IBM Cloud Computing RA is structured in a modular fashion around the functional capabilities (architectural elements), the user roles (that we discussed in Chapter 12) and their corresponding interactions. The IBM CCRA is based on several cloud engagements and incorporates the good practices and methods implemented across these projects. So for an end user adopting these good practices, the risk and cost of implementing their cloud will be lower. The CC RA is built on the ELEG (Efficiency, Lightweightness, Economies-of-scale, Genericity) principles.
One of the principles that I want to highlight here is the Genericity principle – the capability to define and manage generically along the lifecycle of cloud services: be generic across I/P/S/BPaaS and provide an ‘exploitation’ mechanism to support various cloud services using a shared, common management platform (“Genericity”). As we discussed in the cloud delivery and deployment models (Chapter 3), there can be many models for the deployment and delivery of cloud services. A cloud service can represent any type of (IT) capability which is provided by the cloud service provider to cloud service consumers – infrastructure, platform, software or business process services. The beauty and significance of the IBM Cloud Computing Reference Architecture is that it can cater to any of these service delivery and deployment models. So whether you are building a private cloud or a public cloud, or using cloud to deliver IaaS, PaaS or SaaS, the RA remains the same and handles all of these combinations. We have seen the capabilities that we need (Chapter 6) for implementing a common cloud management platform.
IBM has recently submitted the IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) to the Cloud Architecture Project of the Open Group, a document based on “real-world input from many cloud implementations across IBM” meant to provide guidelines for creating a cloud environment. Check out this link which has the interview with Heather Kreger, one of the authors of Cloud Computing Reference Architecture as well as the details of the components that make up the CCRA.
On this topic there is also an article that I found in the SYS-CON Cloud Computing Journal comparing the reference architectures of the big three (IBM, HP and Microsoft), which is an interesting read.
Before we get into the details of the Service Implementation/Transition phase, it is important that we understand the bigger picture. The Word document IBM Cloud Computing Reference Architecture 2.0 (CC RA) (.doc) provides a great description of this bigger picture and goes into the details as required. The architectural principles define the fundamental principles which need to be followed when realizing a cloud across all implementation stages (architecture, design, and implementation). This is a must read for all – development teams implementing the cloud delivery and management capabilities as well as practitioners implementing private clouds for customers.
Chapter 6 - Multiple Entry Points to Deploy and Manage Cloud-Based Services
Cloud Service Management capabilities are needed to enable visibility, control and automation of cloud services. IBM provides the following open standards based integrated capabilities to implement service management for the cloud.
If you are looking for an à la carte software offering/solution for maximum flexibility, you start with IBM Tivoli Service Automation Manager. This flexible solution supports user-driven service requests and automated resource deployment. The key capabilities
IBM Service Delivery Manager (ISDM) is a new offering which is a pre-configured management solution optimized for managing virtual environments and cloud deployments. Like Tivoli Service Automation Manager, this again is a “software only” offering. In addition to the IBM Tivoli Service Automation Manager features, ISDM includes the following additional capabilities
IBM CloudBurst, compared to Tivoli Service Automation Manager and ISDM, not only has the software solution optimized for cloud but also ships with integrated hardware. In addition to what is provided by its sibling offerings, IBM CloudBurst provides the following capabilities.
Thus the three offerings are designed for specific purposes, and selecting the right solution is based on your requirements. You can pick from the following list and, depending on what you need, it is easy to select the solution that meets those requirements.
Quite often people are interested to know about IBM WebSphere CloudBurst and how it is different from the three discussed above. While IBM CloudBurst and WebSphere CloudBurst are both appliances that accelerate time-to-value and reduce costs they are designed for two distinct purposes.
Their integration augments the value of each offering, with IBM CloudBurst enabling end-to-end service request governance for WebSphere CloudBurst provisioning, while users are still able to leverage a single portal for cloud service requests for rapid and optimized provisioning of virtualized WebSphere systems.
Chapter 5 – Service Management for the Cloud
IT Service Management is the integrated management of the people, processes, technologies and information required to ensure the cost and quality of IT services valued by the customer. IT Service Management (ITSM) is the design, creation, implementation, execution and ongoing management of the IT environment and services that meet the needs of the business and consumers. It includes:
· Management of IT as a business
· Design, implementation, and deployment of IT services
· Delivery of services to IT customers at agreed-to levels of service and price points
· Optimization of services through Service Lifecycle Management & Continual Service Improvement
Service management is at the heart of the cloud. Research shows that, on average, 81% of cloud payback is driven by labor savings enabled by service management. As discussed in the previous chapter, cloud computing provides IT departments of enterprises an opportunity to move towards a service-driven management model. The same engineering discipline that rationalized factory floors and production can be applied to IT services. Cloud computing provides the technical foundations enabling reengineering of the IT service model, but the goals of service management remain the same as when it is applied to traditional IT. The key objective of the service management system is to provide the visibility, control and automation needed for efficient cloud delivery in both public and private implementations.
ITIL is one of the foundations for service management best practices. A key element of ITIL is the service lifecycle and the need for best practice processes throughout the life of a service. ITIL Service Lifecycle Modules are:
Cloud services also have a lifecycle that maps to the ITIL service management lifecycle. In the cloud context, service management controls the efficient implementation of new services, integration with the existing portfolio and lifecycle management of standardized IT services. For instance, cloud computing will become a relevant topic in your Service Strategy. You need to see how to leverage integration of cloud and traditional IT services during Service Design. For Service Operation you need an automated way to deploy your cloud services – automated provisioning and image management. Continual Service Improvement (CSI) requires the capability to manage, monitor, secure and meter your cloud services.
IBM Service Delivery Manager, IBM Tivoli Service Automation Manager and IBM CloudBurst provide open-standards-based integrated capabilities to implement service management for the cloud. This solution has first-class integration of existing Tivoli capabilities plus additional new capabilities, workflows, and best practices packaged together as a single solution.
When discussing IT service lifecycle management it is good to discuss the standardization step as well. Standardization helps improve overall operations. The more you can standardize, the more you can reduce operating expenses such as labor and downtime – the fastest growing portion of IT expenditures. Tivoli Service Automation Manager takes care of standardization and best practices in all the steps of the service lifecycle with the capabilities discussed below.
These capabilities of providing visibility, control and automation across the business and IT infrastructure result in the following key benefits
We will discuss in detail how you could use IBM CloudBurst, IBM Service Delivery Manager and Tivoli Service Automation Manager for each of these steps in the lifecycle. If you are a developer, the following chapters will help you understand the technologies and skills needed to do the services design, automation and management.
Chapter 3 – Cloud Deployment and Delivery Models
For enterprises, the most attractive factors of cloud are its flexible sourcing options and choices of deployment. And again, the different deployment and delivery models can co-exist, and it is possible to integrate with traditional IT systems and with other clouds.
Cloud Delivery Models
Private cloud refers to IT capabilities provided “as a service” over an intranet, within the enterprise and behind the firewall. It is privately owned and managed, with access limited to the client and its partner network. The private cloud drives efficiency, standardization and best practices while retaining greater customization and control within the organization. In a private cloud environment, all resources are local and dedicated, and all cloud management is local.
Figure 1 Private Cloud
Public cloud refers to IT activities/functions provided “as a service” over the Internet, owned and managed by the service provider. In a public cloud, access is by subscription.
The public cloud delivers a select set of standardized business process, application and/or infrastructure services on a flexible, price-per-use basis. Multi-tenancy is a key characteristic of public cloud services.
Figure 2 Public Cloud
Hybrid cloud is a combination of characteristics of both public and private cloud, where internal and external service delivery methods are integrated. For example, in the case of an off-premise private cloud, resources are dedicated but off-premise. The enterprise administrator can manage the service catalog and policies, while the cloud provider operates and manages the cloud infrastructure and resource pool.
Figure 3 Off-Premise Private Cloud
Community cloud – This is the model where the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.
Public vs. Private trade-off considerations
Overall, private clouds receive higher levels of consideration than public clouds among most enterprises, but various other models are emerging.
Figure 4 Cloud Delivery Models
We need to balance the business benefits of increased speed and lower cost of public cloud offerings against the security, infrastructure ownership and service management considerations when choosing between a public and private cloud offering for a capability. The governance model, resiliency, level and source of support, architectural and management control, compliance, customization/specialization, etc. are other considerations.
Public and Private Clouds are preferred for different workloads. Many enterprises still prefer to host their traditional applications out of their private cloud. The top private workloads include
As and when a workload becomes more standard and the SLAs are well established, the same service becomes easy to consume over a public cloud. This is similar to how you can access well-defined banking functions through ATMs; only when you need some special service do you go to your bank these days. Similarly, top public workloads include
Cloud Deployment Models
All the computing-related functions that clouds provide are accessed through a service catalog and delivered as integrated services. The different layers of IT-as-a-Service are referred to as the cloud deployment models. More details of these definitions can be found at the NIST website, which is the source for some of the text below.
Figure 5 Cloud Deployment Models
Infrastructure as a Service (IaaS) is the service delivery model where customers use processing (servers), storage, networks and other computing resources/data center functionality. IaaS provides the ability to rapidly and elastically provision and control resources. In this model customers can deploy and run software and services without the need to manage or control the underlying resources. The IBM Research Compute Cloud (RC2) is an example of this model. Smart Business Desktop on the IBM Cloud is another example of IaaS that enables desktop virtualization through a subscription service with no upfront fees or capital expense. Consider reading about IBM CloudBurst if you are building your own IaaS platform.
Platform as a Service (PaaS) is the delivery model where customers can use programming languages, tools and platforms to develop and deploy applications on multi-tenant shared infrastructure, with the ability to control the deployed applications and environments. All of this again can be done without the need to manage or control the underlying resources. IBM BPM BlueWorks provides tools to build your own business process. WebSphere CloudBurst is also something for you to look at if you are building a PaaS yourself.
Software as a Service (SaaS) is the popular model where customers use applications (e.g., CRM, ERP, e-mail) from multiple client devices through a Web browser on multi-tenant and shared infrastructure, without the need to manage or control the underlying resources. An example of this model is IBM LotusLive.
Business Process as a Service (BPaaS) is an emerging model where customers consume business outcomes (e.g., payroll processing, HR) by accessing business services via Web-centric interfaces on multi-tenant and shared infrastructures. Smart Business Expense Reporting on the IBM Cloud is one of the offerings in this category.
In the first two parts of this series we tried to define the term “cloud computing”. Having understood what it is, let us now look at how and why cloud computing is gaining importance now.
As the world is becoming more interconnected, infrastructure needs to become dynamic to bring together business and IT. The growth of instrumentation, interconnection and intelligence in the world is driving the emergence of IT and business services and the requirement for service management systems. To create such a dynamic infrastructure, customers (businesses) are looking for the following capabilities
If you research how businesses can address or acquire the above capabilities, cloud computing seems to hold the key answers to the above considerations. An effective cloud computing deployment is built on a dynamic infrastructure and is highly optimized to achieve more with less, leveraging virtualization, standardization and automation to free up budget for new investment.
A Consumption model: new user experience and a business model
A Computing and Delivery model:
One of the earliest groups to take a step towards identifying some of these use cases is the Cloud Computing Use Cases Workgroup on Google Groups. This collaborative effort of cloud consumers and cloud vendors has put out a white paper that discusses some of the basic definitions. The paper further discusses various use case scenarios from a delivery and deployment model perspective. The white paper is in its fifth iteration, where the group members are now discussing the what and how of “moving to the cloud”. The current version of the paper can be found here.
The delivery model (public, private or hybrid) selection depends on the workload. Research studies by IBM indicate that the different types of workloads that could be delivered internally with a private cloud or on a fully shared environment on a public cloud are the following.
Cloud Deployment and Delivery Models
There are multiple delivery and deployment models that cloud computing supports to deliver the promised capabilities. This choice and flexibility of different deployment and delivery models is key to the success of the cloud computing platform. The flexible cloud delivery models include
Standard cloud service types are emerging and guiding IT industry development. The different deployment models are
Defining Cloud Computing
Let’s start the first module by trying to understand and define the term cloud computing in detail. It comprises two words – cloud and computing. So, simply put, it is computing that you can offer on the cloud. What is the cloud referred to here? The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the network. Computing could be any goal-oriented activity requiring or benefiting from the use of information technology, which includes hardware and software systems used for a wide range of purposes: processing, structuring, and managing various kinds of information.
There are several definitions that you can find on the web for cloud computing.
The National Institute of Standards and Technology (NIST) Information Technology Laboratory has been promoting the effective and secure use of cloud computing technology within government and industry by providing technical guidance and promoting standards.
NIST Definition - Cloud computing is a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Wikipedia - Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid.
Internet-based computing has always been available, so what’s different now? The difference is that cloud computing is a paradigm shift. Cloud computing is a new consumption and delivery model inspired by consumer internet services. Cloud computing is still an evolving paradigm, but in general most of the companies involved with cloud have agreed on certain general characteristics or essentials that qualify any internet-based computing to be referred to as a cloud. They are the following
On-demand self-service - A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed without requiring human interaction with each service’s provider.
Ubiquitous network access - Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Location independent resource pooling - The provider’s computing resources are pooled to serve all consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The customer generally has no control or knowledge over the exact location of the provided resources. Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
Rapid elasticity - Capabilities can be rapidly and elastically provisioned to quickly scale up and rapidly released to quickly scale down. To the consumer, the capabilities available for rent often appear to be infinite and can be purchased in any quantity at any time.
Pay per use - Capabilities are charged using a metered, fee-for-service, or advertising based billing model to promote optimization of resource use. Examples are measuring the storage, bandwidth, and computing resources consumed and charging for the number of active user accounts per month. Clouds within an organization accrue cost between business units and may or may not use actual currency.
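As a small, purely hypothetical illustration of the pay-per-use characteristic (the rates and usage figures below are invented, not taken from any provider's price list), a metered monthly charge is simply measured usage multiplied by published rates:

# Hypothetical pay-per-use metering; rates and usage are invented for illustration.
RATES = {
    "storage_gb_month": 0.05,   # per GB stored per month
    "bandwidth_gb": 0.02,       # per GB transferred
    "cpu_hour": 0.10,           # per CPU-hour consumed
    "active_user_month": 2.00,  # per active user account per month
}

def monthly_charge(usage: dict) -> float:
    """Sum metered usage times the published rate for each dimension."""
    return sum(usage.get(key, 0) * rate for key, rate in RATES.items())

if __name__ == "__main__":
    usage = {"storage_gb_month": 500, "bandwidth_gb": 1200,
             "cpu_hour": 800, "active_user_month": 40}
    print(f"${monthly_charge(usage):.2f}")  # $209.00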
The intent of this blog is not to duplicate content from other web sites into this article, but to provide a means to navigate through the variety of resources that are available and to take a structured approach to understanding the term. Once we have understood this basic definition, let’s look at other resources for further reading.
The best first stop for getting started with the basics is the cloud computing zone on IBM developerWorks. There is a specific section called New to Cloud that discusses some of the frequently asked questions.
· What is Cloud Computing?
· Is Cloud Computing the same as Software-as-a-Service?
· Where can I learn more about Cloud Computing?
· What types of application can run in the Cloud?
Cloud Computing Primer - Part 1 – This white paper, recommended as one of the resources for the Cloud Computing Certification, discusses the definition in detail. Beyond the definition, it discusses the cloud computing context and how it is different from current hosted services. Virtualization plays a key role in meeting some of the characteristics of cloud like elasticity and scalability, workload migration and resiliency. The article discusses virtualization and its effect on cloud computing. It further tries to bust some common myths about cloud computing, like
To get an overview, it is best to start with these excellent 3 to 4 minute videos introducing the basics of cloud computing from Common Craft and rPath – Cloud Computing in Plain English and Cloud Computing Plain and Simple. Cloud Computing Explained is another simple video that explains cloud computing in a way that everyone can understand! You can find many videos on YouTube if you search for cloud computing, but the one I liked best is this one where a dad explains cow computing – I mean cloud computing – to his daughter. Check it out.
SlideShare is another good place where I found some very interesting presentations on cloud.
The best way to learn, I feel, is learning together. That’s the power of communities: we can collectively learn from each other. So I would suggest all of you (if not already members) join Cloud Computing Central as well as the IBM Cloud Computing Community.
Finally what better way to discuss this topic - Enjoy this song (parody) by Loose Bruce and get the complete essence of What’s Cloud Computing.
We had our first meeting of the IBM Cloud Certification Study Group yesterday. The objective of the study group is to pass the IBM Certified Solution Advisor Cloud Computing Architecture V1 certification exam. I wanted to thank all the group members who attended and shared their ideas on how to study for the certification exam. We had group members participate from all over the globe – from Sweden, India, North America and Australia. If you couldn't make it, have no worries; we'll arrange another meeting in a couple of weeks' time. Please feel free to join us.
During our meeting we decided on a strategy of "divide and conquer" in our approach to studying for the exam. By this I mean taking advantage of each individual's strengths and sharing them with the group. One group member might be well versed in cloud security and another might be proficient in SaaS. The idea is to get together and share our knowledge.
During our meeting we covered the following:
I'm really looking forward to working with the study group and ultimately becoming an IBM Certified Solution Advisor too.
It’s a funny thing. Your signature uniquely identifies you as you. The last thing you’d want is to have dozens of signatures; it would make identity theft that much simpler. But when it comes to an IPS, many assume the more signatures, the better. The FortiGuard IPS service is marketed as inspecting “Over 8,000 signatures consisting of 15,649 rules.” Check Point notes that it delivers “1,000s of signature, behavioral and preemptive protections.” All of which raises the question: is the sheer quantity of signatures the right way to measure the effectiveness of an IPS?
Signatures: Quality Not Quantity is the Key
Traditional IPS appliances remain limited in many ways. An IPS is only effective at protecting devices on its network; it cannot protect cloud and mobile traffic. Security appliances must be carefully "cared for," constantly requiring new signatures and software patches, which becomes a big time sink for IT teams. And with HTTPS traffic now the norm, SSL or TLS inspection is essential for any IPS. Yet decrypting encrypted traffic is processing-intensive, exacting a heavy toll on IPS performance.
Against this backdrop we need to consider the value of additional threat signatures. Every signature applied by an IPS requires additional processing. As a result, IT managers find themselves in the unenviable position of weighing security protection against operational efficiency and hardware constraints. On the one hand, they can run all IPS signatures for maximum protection, ultimately forcing an expensive hardware upgrade when the number of signatures starts to exceed the IPS's processing capacity. On the other hand, they can run a select subset of IPS signatures, which significantly complicates the deployment by requiring the evaluation of every signature's severity, performance impact, and ability to accurately identify a threat.
The reality is that all too many threats detected by IPS signatures are either irrelevant or can be defended against more efficiently with other security systems. Our security researchers recently completed an analysis of signatures supplied for the open source Snort IPS. They found that many of its signatures block attacks against outdated applications, while many others identify the same attack using different IP addresses or domain names. Attackers often jump between domains and IP addresses faster than Snort updates, making many of these signatures irrelevant.
The truly accurate measure of an IPS's effectiveness should be less about the quantity of signatures and more about their impact. Like an Aikido master deflecting attacks with the most economical movement, the successful IPS stops the greatest number of serious threats with the fewest signatures.
How do we gain this Aikido-like efficiency from our IPS systems? First, rather than trying to create IPS signatures for every attack, we need to start relying on other security engines to do their jobs. Security engines with reputation data, for example, are a far better way to block threats from rapidly changing domains or IPs than dozens of specific, hardwired IPS rules. The fact that this isn't happening reflects a deep architectural flaw within traditional IPS architecture.
Even when they are packaged with other tools or share common management consoles, legacy IPS operation remains siloed. The pattern-matching language used to build IPS signatures cannot use information from other security modules, such as application control, antivirus, and URL categorization and reputation. The lack of interaction between security components is intentional: IPS was designed to work at wire speed, and integrating its real-time processing with other security products adds too much delay to the session. However, without context from other security sources, IPS signatures may erroneously block good traffic flows or catch too few threats.
The Solution is Context-Aware Protection
The second way to improve IPS efficiency, then, is to build signatures based on the symptoms of an attack rather than its details. By replacing traditional signatures with context-aware signatures, the IPS can identify attacks that are often missed entirely, such as vulnerability exploits and malware command-and-control communication, or that are otherwise detected only with many false positives or negatives.
Essential to building context-aware signatures is integration with other security systems. The signature’s pattern-matching language should tap the full context of networking and security attributes associated with each session, flow and packet including:
To do this, the IPS provider needs a deep understanding of real-world traffic patterns. This can often be gained from big data insights derived from monitoring the large volumes of traffic crossing global backbones. With that insight, the IPS provider can optimize IPS signatures for maximum effectiveness and test them on real-world data before releasing them to customers. And when delivered as a service, the IPS provider can monitor and tweak those context-aware signatures on an ongoing basis.
The Cato IPS
It's for those reasons that Cato Networks recently introduced a context-aware Intrusion Prevention System (IPS) as part of its Cato Cloud secure SD-WAN service. Cato's cloud-based IPS is fully converged with the rest of Cato's security services, which include a next-generation firewall (NGFW), secure web gateway (SWG), URL filtering, and malware protection.
Cato IPS is also the first to be integrated with a global SD-WAN service, allowing Cato security researchers to test, modify and perfect IPS signatures using live traffic from Cato’s backbone. “With Cato IPS, our customers gain richer defense through an always current IPS, smarter signature with contextual awareness, incredible scalability that covers SSL encrypted traffic, and the insight of our world-class security research team,” says Gur Shatz, co-founder and CTO of Cato Networks.
To illustrate the impact of context awareness, consider how a traditional IPS might block suspicious IPs and URLs. Hundreds of signatures around specific domains or IP addresses is one approach, but a wasteful one. A context-aware IPS, such as Cato IPS, can block suspicious locations by leveraging geolocation restrictions:
The recent WannaCry outbreak, for example, can also be stopped with Cato IPS by detecting the EternalBlue exploit used by WannaCry:
Rich context also helps with IPS performance. Normally an IPS lacks visibility into the user's environment and must run all signatures on traffic from all clients. Not only does this generate false positives, it also wastes processing on irrelevant traffic, such as looking for Android-based threats on iPhone or Windows machines. With deeper visibility into the packet stream, the Cato IPS can be more intelligent about which rules it activates. Knowing a device is an Android means the IPS can safely skip signatures specific to Windows devices or iPhones.
With mobile devices, IoT, and the cloud, our IT environments are only becoming more complex. We face an ever greater range of threats, which cannot be stopped with a traditional IPS. Context awareness gives the IPS substantial stopping power with fewer signatures that are better tuned to today's rapidly shifting threat landscape.
Over the years, growing storage needs led to the invention of simpler ways to manage file storage. But this isn't an easy task, because storage users also want efficiency and control along with great service levels, all necessary to keep up with an ever-changing business world. This is how file virtualization came into being.
sofiastechtalk
Solenoids are helically shaped coils controlled by electric currents. Essentially, solenoids are electromechanical in nature: they are controlled by electricity and convert the electrical signals they receive into mechanical movement. Solenoid valves make use of solenoids to control the flow of either liquids or gases.
These valves are among the most commonly used control elements in fluidics.
Their tasks range from shut-off and release to dosing and even distributing or mixing fluids. They are among the fastest and safest switching mechanisms available in valves and are highly reliable. They are long-wearing, require little power and have a compact design.
How it works
Solenoid valves have two main parts: a solenoid and a valve. The solenoid takes electrical energy and converts it into a mechanical force that opens or closes the valve. The opening and closing of the valve is usually accomplished by a plunger-type actuator.
Almost all solenoid valves work on a two-mode, or digital, principle: they are either active (switched on, with an electric current passing through the coil) or resting (inactive or switched off, with no electric current).
Modes of operation of Solenoid valves
In general, there are two types of solenoid valves: direct-operated and pilot-operated.
Direct-Operated Solenoid Valves
These valves are controlled entirely by the solenoid force. A plunger in the valve is held in place, either open or closed, with the help of the fluid pressure and a closing spring. The only thing that can move the plunger is the solenoid force.
These can function in two different ways:
Normally closed - In this case the solenoid valve is closed when switched OFF and opens when switched ON. These valves require a continuous flow of electricity to keep the valve open.
Normally open - In this case, the solenoid valve is open when switched OFF and closes when switched ON. These valves require a continuous flow of electricity to keep the valve closed.
There are also solenoid valves designed to switch between normally open and normally closed states. These are called latching or bi-stable valves. For these, short electrical impulses are enough to switch the valve open or closed, reducing electrical power consumption and heating to negligible amounts.
Pilot-Operated Solenoid Valves
These valves use a smaller pilot solenoid valve to control a larger valve. The second, larger valve is usually activated pneumatically, but the two together are referred to as a pilot-operated solenoid valve.
Pilot-operated valves generally require less electricity to operate, but they have the drawback of needing continuous, constant power to remain in the active state, whereas a direct-acting valve requires full power only to activate and thereafter needs much less power to hold its active state.
Pilot-operated solenoid valves are also much slower than direct-operated valves.
Types of solenoid valves
Two-way valves have two ports, one for inflow and the other for outflow. This is the most basic solenoid valve, with a normally open or normally closed state.
Three-way valves have three ports. These can have one inlet and two outlets, or two inlets and one outlet, depending on their function. In this type of valve, one outlet stays closed while the other stays open. These are used when the operation requires alternating pressure and exhaust, for example in a dishwasher.
Four-way valves have four or five ports. These can have one or two solenoids with a single valve operator. In this type of valve, half the ports supply pressure and the remaining connections provide exhaust. These valves are used in dust collectors, safe field wiring, gas applications and so on.
Proper research is required before buying a solenoid valve to suit a given function. It is important to note the specifications required for the system, the type of fluid used in the operation, and the type of material that should be used for the seals.
sofiastechtalk
Sweating is an involuntary process that our skin performs in order to regulate body temperature. Technically known as perspiration, it is the release of salt-based fluids from the sweat glands located under the skin.
While sweating is an essential natural process, excessive sweating can lead to social problems and sometimes embarrassment in public settings. Hence the need for a treatment that could control excessive sweat, which gave rise to iontophoresis.
Iontophoresis was brought into active treatment in the early 1940s, and since then the process has evolved considerably. Iontophoresis is an advanced option for patients suffering from excessive sweating, medically known as hyperhidrosis. With an impressive reported success rate, typically ranging from 65% to 98.5% depending on the targeted part of the body, iontophoresis is considered safe and reliable.
Working Of Iontophoresis
Iontophoresis uses purified water as a medium to conduct a mild electrical current to the sweat glands in the skin. The process works by passing charged particles through the skin to the sweat glands, temporarily making them dysfunctional so that they stop producing excessive sweat.
Areas That Can Be Treated With Iontophoresis
Several body parts can be treated with the desired results to eliminate excessive sweat using iontophoresis. Typically, iontophoresis enjoys a success rate of about 98.5% for the feet and around 65-75% for the armpit and underarm area. Other body parts that can undergo iontophoresis include the face, forehead and scalp.
How Is Iontophoresis Performed?
The major factors on which the success of iontophoresis depends include:
Utmost care is taken to conduct iontophoresis in an environment where the likelihood of sweating is at its minimum.
Side Effects Of Iontophoresis
As mentioned earlier, iontophoresis is, by the very nature of the treatment, one of the safest options available. But there are exceptions to all good things in this world. Some of the side effects of iontophoresis include:
Patients Who Should Avoid Iontophoresis
Though iontophoresis is a safe, reliable treatment, as has been mentioned repeatedly throughout, certain categories of patients must avoid undergoing it: pregnant women, patients fitted with a pacemaker, patients with metal implants and patients suffering from epilepsy.
FDA-Approved Devices For Iontophoresis
For patients' convenience, FDA-approved devices are on the market that can be used for iontophoresis at home.
A powerful package, this device has the highest success rate. More than 20,000 units have been sold to date, which speaks to its popularity. An FDA-approved device, it is also the most expensive consumer device for iontophoresis.
Though the look of this device is dull, the unit is powerful enough to provide the desired results. It is expensive, yet one of the more popular FDA-approved devices in practice.
Alternative To Iontophoresis
What if iontophoresis fails to treat your concern? There are other ways to reduce excessive sweat as well. Some of them use additional substances to enhance or strengthen the iontophoresis process. These enhancements include:
The passages above give a fairly holistic view of the iontophoresis process. Consider it only a general guide, and make sure to get an opinion from a specialist before undergoing iontophoresis.
Electronic signatures are a robust method of verifying the integrity of an electronic document. They are the digital counterpart of putting your signature on the dotted line. With the majority of organizations shifting from paperwork to electronically managed records, the concept of signing these documents electronically is a no-brainer. The need is even greater when those who sign a document and those who need to verify it work remotely. It doesn't help that the general public's perception of eSignatures is poor compared to their physical counterparts. A recently published paper in the Journal of Experimental Social Psychology looks into the trust issues people have with eSignatures. Another paper explores the indirect side effects eSignatures have on individual honesty and integrity. Properly implemented eSignatures are in fact very secure and resistant to tampering, which is supported by the fact that online contract signing is going mainstream.
There are two regulatory acts that provide the baseline for eSignature security compliance standards for implementations around the world: the ESIGN Act in the US and eIDAS in the European Union. eIDAS identifies three types of eSignatures: basic, advanced (AES) and qualified (QES).
Basic Electronic Signatures
This type of signature involves the signatory putting their signature mark on the document (typed or drawn) and then protecting it with a cryptographic signature. This "witness" cryptographic signature binds the signature marking to the document, so no unauthorized changes can be made to it afterwards. The scheme must also ensure that the person putting in the signature is actually the one who is supposed to sign; making this accurate requires authentication schemes that run as a precursor to the document signing process. The key used to sign documents under the basic eSignature scheme can either be a centralized one from a service provider or one belonging to the organization itself.
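To make the "witness" signature idea concrete, here is a rough sketch using generic OpenSSL commands; the key and file names (signer.key, signer.pub, contract.pdf) are placeholders, and a real service would typically use certificates issued to an identity rather than bare keys:
openssl genrsa -out signer.key 2048
openssl rsa -in signer.key -pubout -out signer.pub
openssl dgst -sha256 -sign signer.key -out contract.pdf.sig contract.pdf
openssl dgst -sha256 -verify signer.pub -signature contract.pdf.sig contract.pdf
If even a single byte of the document changes after signing, the final verification step fails, which is exactly the tamper-evidence described above.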
Advanced Electronic Signatures (AES)
The Advanced Electronic Signature scheme is more secure than the basic scheme. From EU Directive 1999/93/EC:
“advanced electronic signature” means an electronic signature which meets the following requirements:
(a) it is uniquely linked to the signatory;
(b) it is capable of identifying the signatory;
(c) it is created using means that the signatory can maintain under his sole control; and
(d) it is linked to the data to which it relates in such a manner that any subsequent change of the data is detectable;”
Unlike the basic scheme, where the same key is used to put a cryptographic signature on the document, AES requires that each signatory have their own unique key. The signatory's identity must be established using a certificate provided to them by a trusted authority. The implementation should also be able to identify whether the document data has been tampered with, and reject the signature in that case.
For the actual implementation, three standards are used: XAdES, PAdES and CAdES. XAdES (short for "XML Advanced Electronic Signatures") is based upon XML Signatures, which is a general purpose framework for digital signatures. CAdES (short for CMS Advanced Electronic Signatures) is based upon Cryptographic Message Syntax (CMS), which is another general purpose framework for digital signatures. PAdES (short for PDF Advanced Electronic Signatures) is one of the most popular standards. This standard defines a set of restrictions for PDF document format.
When a digital document is presented to a system, the reviewer validates that the document has not been tampered with, and makes sure that it is signed by a certificate that they trust. That certificate in turn should be trusted by another trusted certificate and so on in the chain, until we reach the trusted root - which is a certificate that is verified to be of legitimate origins and is already stored with the reviewer system. This is very similar to how web browsers validate a website’s certificate.
But what if an organization discovers that its identity was compromised long ago and that documents signed after that point should not be trusted? That would require revoking the certificate. So when the reviewer is verifying the document's integrity, they must be able to tell that the certificate they have been presented is no longer valid. This is done using Certificate Revocation Lists (CRLs), which are published periodically by the certificate-issuing authority, or using OCSP, a protocol for obtaining a certificate's revocation status in real time. These methods work well but require network connectivity. A solution to this problem is Long-Term Validation (LTV): in an LTV scheme the required validation elements are embedded in the document itself, so the reviewer can verify the signature later on. One benefit of PAdES is that it supports LTV.
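As a minimal illustration of chain and revocation checking with generic OpenSSL commands (the certificate and bundle file names are placeholders), a reviewer-side check might look roughly like this:
openssl verify -CAfile root_and_intermediates.pem signer.crt
openssl verify -crl_check_all -CAfile root_intermediates_and_crls.pem signer.crt
The second command only succeeds when current CRLs for every CA in the chain are available in the bundle; an OCSP responder, or an LTV-style response embedded in the document, serves the same purpose without the reviewer having to fetch revocation lists at verification time.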
Qualified Electronic Signatures (QES)
QES is a more trusted version of AES. It involves a formal registration process in which the signatory verifies their identity before a qualified certificate-issuing authority.
eSignatures are gaining more mainstream acceptance than ever before. The combination of ease of use and security makes this technology very promising, and the market is still learning to adopt it. We will see more development in this space in the coming years, especially in relation to smart contracts.
urajan
This is just about deploying a test OpenStack instance to try it out, run demonstrations, or learn how to deploy cloud instances. In short, I decided to have a test OpenStack instance running on my old laptop and, if possible, a booking page that lets users request access to this OpenStack setup via Horizon or SSH. I ended up creating this page for booking it - http://tryopenstack.cloudaccess.host - and I have a separate URL for accessing OpenStack: http://tryopenstack.dlinkddns.com (if this URL doesn't work, it's because I've shut down my laptop).
Which operating system do you need? I set up both DevStack and RDO (RPM Distribution of OpenStack), two popular free OpenStack distributions. At the time of writing, Ubuntu 16.04 is great for setting up DevStack, which is geared towards giving OpenStack developers an environment to develop and test in. However, after installing the latest release of DevStack, I decided to try out RDO. RDO is deployed using Packstack, a tool that uses Puppet, an open source configuration management tool, to deploy the various components that make up OpenStack.
At the time of writing, DevStack's latest package sets up OpenStack Pike (which can be confirmed from the Nova version, 16.0.0), while the latest stable release of RDO contains OpenStack Ocata.
To set up RDO, CentOS works great; Fedora may work as well. CentOS is a free Linux distribution based on Red Hat Enterprise Linux. The version I installed was the latest minimal server version pulled from here: https://www.centos.org/download/. I got the "Everything ISO" version of the installer, which has the option to set up the minimal version.
It's best to disable firewalld and NetworkManager. RDO recommends this on their installation quickstart page: https://www.rdoproject.org/install/quickstart/
Here are some of the steps I had to perform:
sudo systemctl disable firewalld
sudo systemctl stop firewalld
sudo systemctl disable NetworkManager
sudo systemctl stop NetworkManager
sudo systemctl enable network
sudo systemctl start network
My laptop has a wireless port, and for convenience I didn't want to physically plug in an Ethernet cable. This presents a new problem. The Packstack setup recommends that NetworkManager be disabled on the operating system, which means DHCP is out of the question, and it's also not recommended to have DHCP on the port while running OpenStack. I initially solved this with a DHCP reservation, a specific IP address tied to the MAC address of my wireless port. In a way it's static, but it's still DHCP handing the same IP address to my wireless port every time. So I started experimenting with several static IP configurations on my wireless port. Eventually, I figured out that I had to authenticate to my wireless router manually.
To do this, I generated a hex code for my SSID and password combination; any free WPA PSK generator will do. Then I created a wpa_supplicant configuration file called "wpa_supplicant.conf" with the following information.
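A minimal sketch of that file, with a placeholder SSID and the generated hex PSK substituted in, looks like this:
network={
    ssid="MyHomeSSID"
    psk=<64-character hex key from the WPA PSK generator, without quotes>
}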
Then use the following command to authenticate to the wireless router.
wpa_supplicant -B -D wext -i wlp4s0 -c /etc/wpa_supplicant/wpa_supplicant.conf; dhclient wlp4s0
As you may have guessed, wlp4s0 is my wireless port. I then had to make this persistent on boot, so I added the line above to the /etc/rc.d/rc.local file and set the right permissions for that file: chmod +x /etc/rc.d/rc.local.
At this stage, it’s best to reboot your laptop to make sure you can connect to the Internet through the wireless port.
It's not mentioned on the website, but it's better to disable SELinux or set it to permissive. It's also recommended to have a fully qualified domain name (FQDN). Ensure you have properly set up the /etc/hosts and /etc/resolv.conf files.
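For reference, a minimal sketch of these settings, assuming a placeholder static IP of 192.168.0.10 and a placeholder FQDN of openstack.example.com, would be an /etc/hosts entry like
192.168.0.10 openstack.example.com openstack
an /etc/resolv.conf along the lines of
search example.com
nameserver 192.168.0.1
and, for SELinux, switching to permissive mode both immediately and across reboots:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config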
Then perform the following self-explanatory steps;
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y centos-release-openstack-ocata
yum update -y
yum install -y openstack-packstack
For my all-in-one laptop setup, contrary to what's specified on the installation website, it's best to generate an answer file, which you can use to fine-tune what actually gets set up.
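A typical way to do this with Packstack (the file name here is just an example) is:
packstack --gen-answer-file=answer.txt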
This produces a file (called answer.txt) that contains all the parameters for the OpenStack installation. I disabled certain OpenStack components like Swift (the object storage service) and telemetry (the metering service), as this is just a test setup on a laptop.
Then I ran the installation using that answer file.
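With an answer file, the installation is typically kicked off like this, pointing Packstack at the file generated earlier:
packstack --answer-file=answer.txt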
My laptop had around 300 GB of disk space, 4 cores and 8 GB of RAM. The setup completed in about 30 minutes, and you should see an "Installation completed successfully" message.
It didn't work the first time (Error: systemd start for rabbitmq server failed), and I asked a question on the Ask OpenStack forum. It hadn't been answered by the time of this blog post, but I solved the problem by adding a search entry with my domain name to my /etc/resolv.conf file and by adding my internal IP (the wireless port's IP) and my domain name to the /etc/hosts file. I will explain how to generate this domain name later. One thing to note: the way you know the installation didn't succeed is that you don't get the "Installation completed successfully" message. You will still be able to access the Horizon dashboard, but several things that depend on the RabbitMQ server will not work.
Once the installation is done, OpenStack sets up several virtual bridges: one for the internal network (your private home network), one for the external network (the public network) and one as a tunnel. Here's a screenshot to better explain this:
There are a couple of ways to handle this. The first, and I'd assume the recommended one, is to give the "br-ex" interface the IP address that was on the "wlp4s0" interface and make wlp4s0 a port in the OVS bridge. If anyone needs this configuration, I'll post it here, but for the sake of brevity let me set it aside for now. In short, the interface configuration would change to something like this:
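A minimal sketch of that layout, following the standard RDO network-scripts pattern (the IP addresses are placeholders, and note that a wireless port usually cannot be attached directly to an OVS bridge, so this pattern really applies to a wired interface), would be an /etc/sysconfig/network-scripts/ifcfg-br-ex like
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
and an ifcfg-wlp4s0 like
DEVICE=wlp4s0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes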
The second way is to use the default network configured by OpenStack on the external bridge, 172.24.4.0/24, as shown in the screenshot above. Since it sits on a bridge, this network can still route traffic via the wireless port to the Internet by default, and the instances you deploy will have Internet connectivity, sort of like a tunnel (I have proof of this below). However, you will not be able to ping the deployed instances from other laptops on your local wireless network.
Once the installation is done, you can simply point your browser to the static IP address that you configured for the laptop, and you should see the OpenStack dashboard. One of the things I was pleasantly surprised by was a "material" theme for the OpenStack interface, inspired by Google's material design, if you're familiar with Android development.
I had a Cirros image stuck in the "queued" state, and I was able to get rid of it (only) from the CLI, using the following command:
glance image-delete <image_ID>. This may have been a consequence of the failed first installation attempt mentioned above.
The rest of the setup is very important. Next, you set up Neutron. I briefly mentioned above which internal and external networks I set up; I can provide more details if anyone needs them. Then set up a virtual router to bridge the internal and external networks using one gateway address. This is a screenshot of the topology from the "second way" I mentioned above.
Then it's best to set up your key pairs. You have to copy your public key into the Key Pairs section in Horizon. Most cloud instances need key pairs for authentication; the regular username/password method of authentication will not work.
Then set up an image. Several cloud-init-enabled images are available on the Internet that you can either download and upload to Horizon or import directly via a URL. I decided to stick with the Cirros image, just for initial testing.
Horizon in the Ocata release has revamped the instance launching screen. It looks much better and is intuitive.
To cut a long story short, I deployed my first instance and it was successful. I also associated a floating IP on the test 172.24.4.0 network. The wireless port acted as the gateway for the internal 192.168.0.0 network.
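For anyone who prefers the command line to Horizon, the equivalent steps can be sketched roughly like this (the key, image, flavor and network names are placeholders for whatever exists in your own setup):
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack image create --disk-format qcow2 --container-format bare --file cirros.img cirros
openstack server create --flavor m1.tiny --image cirros --key-name mykey --network private test-vm
openstack floating ip create public
openstack server add floating ip test-vm <floating-ip-address>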
What's really surprising for a first-time viewer is that the instance was able to ping google.com despite not being registered on my wireless router.
This step is quite important and should probably be at the top of this page, as it was one of the first things I did. My cheap wireless router has a feature for creating DDNS addresses associated with the public IP provided by my ISP (which may change on a router reboot). So I generated a URL for my public IP address and opened up the HTTP port (80) on the router. If this URL doesn't work, it's because I've shut down my laptop.
However I wanted an always-on URL to enable people to “book” my OpenStack instance. This URL is here: http://tryopenstack.cloudaccess.host/
The idea is that if you send me a request via this URL (http://tryopenstack.cloudaccess.host/), I'd be able to open my laptop, create a project and a user for you, and let you access the resources. I also set up a booking page to charge for resources via PayPal.
Here's a diagram I came up with in a short time to explain the whole thing to a layman:
I wrote the whole thing in about 30 minutes, and I apologize for grammatical errors.
Things to try in the future:
FrankGothmann
The use of business intelligence and analytics to make decisions is on the rise in corporate America. According to a Gartner report, the BI and analytics market is expected to grow to $20 billion by 2019. We are already at a stage where over 50% of analysts and users in organizations have access to self-service business intelligence tools. This is not surprising given recent reports suggesting that businesses using such tools are five times more likely to make faster decisions.
But it's not just faster decision making that is driving the adoption of business intelligence tools. BI and analytics rely heavily on data to arrive at conclusions, and decisions based on data are likely to be a lot more predictable and trustworthy than those based on gut feelings or consumer surveys.
Deploying business intelligence at work is, however, much more than simply installing vendor software. BI tools are only as successful as you make them. The most successful businesses are those that see BI as one component of a larger process and culture change within the company. This change requires managers to establish processes that aggregate more data, process it, and actively pursue insights from that data to arrive at decisions.
Identify your objectives
The first step towards a successful deployment of BI is understanding your business objectives. A cost-reduction project, for instance, would require a system where capital and cash outflow data from all your warehouses and distribution centers is available at a granular level. On the other hand, if your objective is revenue maximization, your system will need not only data on the various SKUs in the market along with their sales and distribution numbers, but also similar data for your competitors. In other words, knowing your business objectives will tell you what kind of data you need, which will help you establish a system that gathers it. Without a working data-gathering system, deploying a BI platform is meaningless.
Pick the right tool
The success of a BI deployment depends to a great extent on the software tools you use. However, the best BI tool in the market may not be the right one for your business. Your evaluation should cover parameters like the cost of the tool, the size of business it is targeted at and the nature of the deployment. According to this list of BI software tools, there are over 339 products available in the market today. While tools like Slemma BI and Grow BI are targeted at the price-sensitive customer, others like Rapid Insight focus on businesses that prefer a local Windows installation. Pick a tool that not only meets all your feature needs but also works within your budget and deployment requirements.
Effect a data-driven culture
Business intelligence is one of the best examples of the popular computing phrase 'garbage in, garbage out': incorrect or insufficient input produces incorrect or insufficient output. The only way to break this cycle is to drive a cultural change within the organization that focuses on gathering data at every level and bringing it together for the decision-making process. This is not easy to achieve, especially if you are an enterprise with hundreds or thousands of employees. The lead, however, needs to come from the top, and this push for data-driven decision making is crucial to the success of a BI deployment.
Do you make use of business intelligence software at work? Share your experience with us in the comments.