February 18, 2014 | Written by: Rene D. Svendsen
In the previous blog post in this series on cloud computing’s impact on programming models, I wrote about hosting in the cloud.
Cloud computing’s promise of elasticity and the ability to scale up, as well as down, presents a challenge to applications and existing programming models.
In this blog post, I will take a closer look at how to cloud-orient applications through componentizing solutions so that each component is individually scalable and distributable to separate nodes, allowing the cloud to auto-scale each one independently. I will talk about designing with application programming interfaces (APIs) in mind. Finally, I will talk about coping with failure and how developers may find themselves being held responsible for their resource consumption.
If developers follow common design principles like model-view-controller (MVC) separation, service-oriented architecture (SOA) and the componentization of applications with loose coupling between the components, it is not only good for sharing, reuse, agility and problem isolation; such an architecture also lends itself very nicely to leveraging the benefits of cloud computing. A modular design makes it easier to scale and manage the solution building blocks independently. Whether it follows the newer SOA design principles or more classical paradigms such as MVC, the approach of componentizing and dividing up an application fits well with the characteristics of cloud environments.
Following good componentization design is only the beginning.
Each component still needs to be written so that it can be replicated, work in conjunction with duplicate instances and interact with other components in the larger application. One of the biggest issues concerns data sharing, in which a component may not be prepared to share data with copies of itself.
Additionally, when applications are broken into components, the interactions between the components will need to be considerably more asynchronous. Rather than assuming that a request sent to a particular component will be processed immediately, the requester may need to be modified such that it does not block waiting for a response.
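As a minimal Python sketch of that non-blocking style (the `call_component` function is a hypothetical stand-in for a network request to another component), the requester submits work to a thread pool and only blocks at the point the answer is actually needed:

```python
import concurrent.futures
import time

def call_component(payload):
    # Hypothetical stand-in for a request to a separately deployed component.
    time.sleep(0.1)  # simulate latency; the remote side may be slow or queued
    return {"status": "ok", "echo": payload}

executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

# Submit without blocking; the caller keeps working while the call is in flight.
future = executor.submit(call_component, {"order_id": 42})

# ... other, independent work can happen here ...

result = future.result(timeout=5)  # block only when the response is needed
print(result["status"])
executor.shutdown()
```

The same shape carries over to message queues and callbacks: the key change is that the requester no longer assumes an immediate, synchronous answer.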
Componentization by itself won’t guarantee scaling. Two additional pivotal elements that enable this behavior are:
- Monitoring to determine when scaling up or down is warranted
- Automated provisioning to accomplish the actual scaling up or down
Often, an external service monitors the operational activity of the cloud application, checking for resource starvation. As alerts are triggered by breaches of predefined operating levels, the management layer provisions additional resources.
These resources are then configured or activated to become part of the solution. In the scenario of a web application where an additional web server is required, the management system detects the condition, provisions another server, mounts any storage required for serving and adds the new server to a common load balancer pool.
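The scaling decision itself can be sketched in a few lines of Python (the 60 percent utilization target and the replica bounds are illustrative assumptions, not any particular provider's policy):

```python
import math

def desired_replicas(current, utilization, target=0.6, lo=1, hi=10):
    # Target-tracking rule: pick the replica count that would bring average
    # utilization back toward the target, clamped to sane bounds.
    want = math.ceil(current * utilization / target)
    return max(lo, min(hi, want))

print(desired_replicas(3, 0.9))   # overloaded pool: grow it
print(desired_replicas(4, 0.15))  # mostly idle pool: shrink it
```

In a real system the management layer would run a rule like this periodically against monitoring data, then provision or decommission servers to match the result.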
Traditionally, developers of an application or service would also dictate the interfacing aspects for their customers. This usually meant providing a custom graphical user interface (GUI) or client application. However, today customers are demanding access through their preferred access mechanisms. Custom-built applications support mobile devices (phone or tablets), or API style access for use by third parties, as well as browser based access to enable a broad user base.
This is true not just for accessing the features of a service but its data as well. Consider news websites: rather than simply providing a web page with the latest news, most also offer their content through rich site summary (RSS) feeds, mobile applications and social media (Twitter, LinkedIn, Facebook), whichever channel is most convenient for the customer, not the service provider.
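A small Python sketch of the idea: keep one data model and render it for whichever channel the customer prefers (the article data and field names here are invented for illustration):

```python
import json
from xml.sax.saxutils import escape

# One source of truth for the content...
articles = [{"title": "Storm hits coast", "link": "http://example.com/storm"}]

def as_json(items):
    # ...served as JSON for mobile apps and third-party API consumers...
    return json.dumps({"articles": items})

def as_rss_items(items):
    # ...and as RSS <item> elements for feed readers.
    return "".join(
        "<item><title>%s</title><link>%s</link></item>"
        % (escape(a["title"]), escape(a["link"]))
        for a in items
    )

print(as_json(articles))
print(as_rss_items(articles))
```

Because the rendering is separated from the data, adding another access mechanism later means adding a renderer, not reworking the service.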
Coping with failure
Dependencies between components need to be represented or mediated so that they are able to find and invoke each other without prior knowledge of their locations within the cloud (a desired component may be very busy, or even temporarily off-line, and unable to process the request right away). In addition, very large scale cloud systems are typically built using commodity hardware, with the expectation that some of this hardware will fail while cloud applications are running. In addition, some cloud providers offer implementations that span multiple geographic areas (either for performance or disaster recovery reasons) and this poses additional challenges.
Some programming models are more sensitive to this sort of location independence than others.
For example, some technologies such as low-level socket programming and web services are notorious for making assumptions about endpoints and might need to be supplemented by code or handled by intermediaries that can dynamically discover the correct endpoints.
Alternatively, some programming models are less sensitive to endpoints and more tolerant of target applications moving about in the cloud. For example, the Java Naming and Directory Interface (JNDI) enables clients to look up services such as remote Enterprise JavaBeans (EJB) and Java Message Service (JMS) queues or topics without having to know where the actual EJBs, queues and so forth live.
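The lookup-by-name pattern can be sketched in a few lines of Python (a toy in-process registry, analogous to but much simpler than a real naming service; the names and endpoints are invented):

```python
# Clients resolve a logical name; they never hardcode a host or port.
registry = {}

def register(name, endpoint):
    registry.setdefault(name, []).append(endpoint)

def lookup(name):
    endpoints = registry.get(name)
    if not endpoints:
        # The component may be busy, relocated or temporarily off-line.
        raise LookupError("no instance of %r is currently available" % name)
    return endpoints[0]  # a real registry would also health-check and balance

register("jms/OrderQueue", "node-7.internal:61616")
print(lookup("jms/OrderQueue"))
```

If a component is moved to another node, only its registry entry changes; every client that resolves the logical name keeps working.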
New solutions address the potential for failure as a common event and not an edge case.
Open Service Component Architecture (SCA), a newer programming model designed specifically for SOA, is inherently cloud friendly. Within an SCA domain, business components can wire to, locate and invoke each other with complete independence from the underlying physical or virtual topology of runtimes and hosts, relieving the developer of having to deal with where components end up in a cloud.
Recovery-oriented computing is a design technique that assumes failures will happen and takes them into account at all levels of the architecture, design, development and testing of the application.
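One common building block of that approach is retrying transient failures with exponential backoff, sketched here in Python (the `flaky` function simulates a component that fails twice before recovering):

```python
import random
import time

def with_retries(op, attempts=4, base_delay=0.01):
    # Treat failure as the common case: retry with exponential backoff plus
    # jitter, so many retrying clients do not all stampede back at once.
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient node failure")
    return "ok"

print(with_retries(flaky))
```

The caller's code path is the same whether the first attempt or the third succeeds; the failures are absorbed as ordinary events rather than treated as exceptions to be debugged.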
There is no free lunch
While cloud computing offers quite a bit of flexibility and a broad new set of capabilities, the developer must be aware that none of this is free. A runaway process on a personal server may cause an inconvenience, but there is probably no additional cost associated with the situation. Move that condition into a cloud provider that charges by the CPU hour, however, and it suddenly becomes quite a costly error. As with several aspects of cloud computing, developers may find themselves becoming more of an IT manager than they were before. Part of this new role will be to monitor their usage of the resources they allocate to ensure they are using those resources wisely. It will also drive developers to look for the most cost-effective cloud provider based on their application's utilization of the provisioned resources.
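To make the point concrete, here is a back-of-the-envelope cost check in Python (the $0.05-per-CPU-hour rate is a made-up figure; real rates vary by provider and instance type):

```python
def cpu_hour_cost(vcpus, hours, rate_per_cpu_hour=0.05):
    # Hypothetical pay-per-CPU-hour pricing model.
    return vcpus * hours * rate_per_cpu_hour

# A runaway process keeping 8 vCPUs busy for a 30-day month:
print(round(cpu_hour_cost(8, 24 * 30), 2))
```

Even at a modest hourly rate, a forgotten process that would have cost nothing on a personal machine turns into a bill of hundreds of dollars a month.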
In my next blog post in this series on cloud computing’s impact on programming models, I will write about leveraging cloud provided services instead of creating and maintaining these common services yourself.