Making the mental leap
As applications evolve, so do architectures and best practices. Figure 1 shows an example of the classic Web architecture often followed in Java™ EE. In this architecture, the application server is responsible both for executing business logic and for handling Web concerns. Besides executing business logic, applications running on an application server in this architecture have to:
- Generate HTML.
- Assemble the layout of HTML.
- Handle the flow from one Web page to another.
Figure 1. Classic MVC
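To make that division of responsibilities concrete, here is a minimal sketch of the classic style using a hypothetical servlet and page names: the server runs the business logic, generates the HTML, and decides which page the user sees next, all in server-side code.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical classic-style page controller: the application server runs the
// business logic, produces the HTML, and handles the page-to-page flow.
public class OrderSummaryServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        double total = 42.50; // stand-in for a real business-logic call
        if (total <= 0) {
            resp.sendRedirect("emptyCart.jsp"); // Web flow decided on the server
            return;
        }
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>");
        out.println("<h1>Order summary</h1>");
        out.println("<p>Total: $" + total + "</p>");
        out.println("</body></html>");
    }
}
```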
Frameworks like Struts and JSF often manage these facilities for you. This architecture might still be relevant for many applications today, but the model has downsides:
- These applications typically rely on HTTP session state. Over time, HTTP sessions grow larger, which hurts the scalability of the application: the larger the session state, the more scalability and performance suffer.
- The server memory footprint is usually much larger. Frameworks often keep in-memory copies of the view; a typical JSF application, for example, creates objects to maintain a UI tree (almost like a server-side DOM) along with any request-scoped objects. Garbage collection of these UI objects consumes additional CPU cycles as well.
- Tightly coupled development is another disadvantage of this approach. Web page developers often have different skills from the developers who create business logic. Web developers are typically skilled in rendering languages such as HTML and CSS, and in scripting. They often use tools like WYSIWYG editors, scripting editors, and browser debugging tools, and they tend to make rapid changes because they are visualizing Web pages and frequently moving things around. Business logic developers are usually experts in data access, messaging, transactions, and integration, and use different kinds of tools.
Because the browser can now host Rich Internet Applications, it can be a first-class service consumer. Toolkits like Dojo provide a full suite of UI capabilities, so the browser can now manage layout, Web flow between components, MVC, Web state, and other Web concerns. The application server can focus on serving data, executing business logic and business flow, and becoming more stateless and scalable. Figure 2 shows a modern Web 2.0 style application.
Figure 2. Modern Web 2.0 architecture
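As an illustration of this split, here is a minimal sketch of a stateless data service using JAX-RS. The resource and field names are hypothetical, and it assumes your JAX-RS implementation has a JSON provider configured; the point is that the server returns raw business data and leaves layout, flow, and rendering to the browser toolkit.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical stateless data service: the server hands back raw business
// data as JSON and leaves layout, page flow, and rendering to the browser
// toolkit (Dojo, for example). No HTTP session state is used.
@Path("/accounts")
public class AccountResource {

    @XmlRootElement
    public static class Account {
        public String id;
        public double balance;
    }

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Account getAccount(@PathParam("id") String id) {
        Account account = new Account(); // in practice, loaded from the business tier
        account.id = id;
        account.balance = 1500.00;
        return account;
    }
}
```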
There are many advantages to this approach:
- View rendering logic moves to the browser, which makes the server more stateless and thus more scalable.
- Server memory footprint is reduced because fewer objects are needed to maintain request-level state.
- UI CPU cycles shift out to the browser so that the business services tier competes less for CPU/resources.
- Web development can be more agile. Because it is easy to mock Web payloads like JSON, Web 2.0 client developers do not need a fully functional application server; a plain Web server or even the file system is often enough for unit testing. I have been part of several development efforts in which the Web developer built 80% of the Web tier without the actual services being available, and integration testing cycles were reduced as well. Figure 3 shows an example of this pattern, and a minimal stub is sketched after the figure.
Figure 3. Web interfaces
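For example, here is a minimal, hypothetical stub that mimics the JSON payload of a service that does not exist yet. It runs in any plain servlet container (or can be replaced by a static .json file), so the Web developer can build and unit test the browser tier on its own.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical stub that mimics the JSON payload of a business service that
// does not exist yet, so the browser tier can be developed and unit tested
// without a full application server.
public class AccountStubServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/json");
        resp.getWriter().write(
            "{\"accountId\":\"A-123\",\"balance\":1500.00,\"currency\":\"USD\"}");
    }
}
```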
In the enterprise, I realize that we cannot always rely entirely on the client browser for UI rendering. There are a few reasons for that. The most important one is secured logic. Because the browser source can be viewed, there is some logic you simply do not want to ship to the client. This is where server pages can still help: you can take your pure HTML content, wrap it in a JSP, for example, and implement some logic there. An example of UI logic that is safer to run on the server is determining what an end user is allowed to see in the browser. Figure 4 shows this example.
Figure 4. Server rendering for secured logic
This pattern is best when you want to hide the "logic" that determines what an end user sees. On initial load, you manage this with a simple server page tag talking to business logic.
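As a minimal sketch of this pattern (the "user" session bean and its admin property are hypothetical), the pure HTML content is wrapped in a JSP and a simple tag decides, on the server, whether the sensitive markup is emitted at all:

```jsp
<%-- Minimal sketch: pure HTML wrapped in a JSP, with a server-side check
     (the "user" session bean and its admin property are hypothetical)
     deciding whether the sensitive markup is ever sent to the browser. --%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
  <body>
    <div id="summary">
      <!-- static HTML content that every user receives -->
    </div>
    <c:if test="${sessionScope.user.admin}">
      <div id="adminPanel">
        <!-- markup that only privileged users ever receive; the rule that
             decides this never ships to the client -->
      </div>
    </c:if>
  </body>
</html>
```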
However, I want to distinguish this from hiding data options that a user cannot see. For example, suppose you have to render a dynamic form and hide input fields based on user permissions. You can very easily build a JSON-based meta-service that ships only the permitted fields to the browser, and then have a generic client-side renderer using technologies like DTL in Dojo. Figure 5 shows an example of this model.
Figure 5. Dynamic forms with meta-services
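A minimal sketch of such a meta-service in JAX-RS might look like the following; the form, field names, and role are hypothetical. Only the fields the caller is permitted to see are ever described to the browser, where a generic renderer can turn the metadata into a form.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.SecurityContext;

// Hypothetical meta-service: describes only the form fields the caller is
// permitted to see, so the permission logic never ships to the browser.
// A generic client-side renderer (for example, DTL templates in Dojo)
// can turn this metadata into the actual form.
@Path("/forms/order")
public class OrderFormMetaResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String permittedFields(@Context SecurityContext security) {
        StringBuilder json = new StringBuilder("{\"fields\":[");
        json.append("{\"name\":\"customerName\",\"type\":\"text\"},");
        json.append("{\"name\":\"quantity\",\"type\":\"number\"}");
        // the discount field is described only to privileged callers,
        // so other users never even learn that it exists
        if (security.isUserInRole("pricing-admin")) {
            json.append(",{\"name\":\"discountPercent\",\"type\":\"number\"}");
        }
        json.append("]}");
        return json.toString();
    }
}
```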
In many cases, a simple server page wrapper is enough. However, developers often go too far in mixing the classic Web style and the Web 2.0 style of architecture. For example, technologies like JavaServer Faces (JSF) provide a full component rendering model on the server. In a server-side widget model, the output of a component is assumed to be a black box: providers of server widgets can change the way a component is rendered in the browser. If you stay within that black box, you might be able to get by with this model.
Figure 6. Mixing rendering can lead to breakage and bulk
The next step in the evolution of this model is wrapping one toolkit (like Dojo) with another (like JSF). This often happens because people are more comfortable with Java and feel that a familiar underlying rendering toolkit can offset the black box issue. In my experience, however, developers still have to become experts in both the server widgets and the underlying client widget technology, because they have to troubleshoot bugs and maintain the system. You therefore lose the value of being a pure Java developer, not to mention that this model makes your architecture bulky:
- You might pay the memory cost twice, both on the server and in the browser: the browser maintains the DOM with its widgets, while the JSF tree is maintained on the server.
- You lose the interface-decoupling proposition. Web developers now need browser tools and a full Java application server to fully test.
Technologies like JSF can still provide value in scenarios where you support multiple Internet UI paradigms, like Open Web technologies, Flex, and Mobile App platforms. You might want to maintain a single set of components with various rendering capabilities, but these cases are usually specialized.
Figure 7. Server rendering should focus on multi-channel
Web 2.0 maturity
Technologies like Dojo in the browser and JAX-RS on the server are maturing. I have worked with several projects that have successfully delivered Web 2.0 style applications using pure Web technologies. Furthermore, using a framework like Dojo makes the code very maintainable and easy to evolve, more evidence that these frameworks have matured over time. There are several phases to Web 2.0 maturity.
The user interface continues to drive the bulk of applications on the Web. As such, getting your visual requirements for the Web UI right is key.
Figure 8. Visualize
UI designers and developers have to work closely together, and UI designers must become experts in CSS to survive in this world. Visualizing your data leads to the first iteration of your REST APIs.
Once you know how your consumers visualize data, formulating your REST APIs becomes the next step in maturity. REST is the design principle behind the World Wide Web, which was designed to serve up content to browsers in a scalable fashion. The content consists of Web pages with links to other Web pages; the whole page is transferred to the user, hence the "state transfer." REST-based Web services are designed around this principle: stateless services that transfer resources to the client, where each resource is described by a URI. There are many benefits to designing Web services in this fashion.
REST is about creating Web services around a set of constraints. Sticking to those constraints, and using HTTP the way it was designed, lets you better leverage the Web infrastructure: routers, caching proxies (including the cache in your browser), and Web servers are all optimized to deliver Web content, and service providers can optimize around the same patterns. For example, by setting certain caching headers in your data services, you can use the browser as a free cache. Delivering your services through these channels enables them to be optimized as well. REST is about delivering SOA around RESTful principles, following the Web as an architecture.
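For instance, here is a hedged sketch of a JAX-RS data service that sets a caching header; the resource and payload are hypothetical. With a Cache-Control max-age on the response, the browser and any intermediate proxies can serve repeat requests from their caches without touching your server.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical read-mostly data service that sets a Cache-Control header,
// letting the browser (and any caching proxy along the way) serve repeat
// requests without hitting the server again.
@Path("/products")
public class ProductResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response listProducts() {
        String json = "{\"products\":[{\"id\":1,\"name\":\"Widget\"}]}"; // canned payload for illustration
        CacheControl cc = new CacheControl();
        cc.setMaxAge(300);    // caches may reuse this response for five minutes
        cc.setPrivate(false); // shared caches (proxies) may store it as well
        return Response.ok(json).cacheControl(cc).build();
    }
}
```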
As your REST APIs evolve, they open up opportunities for new visualizations. Your consumers can choose your Web visualization, or act on the REST APIs and create their own visualizations. New consumers, in turn, help evolve the REST APIs.
Figure 9. REST APIs
Social aspects are critical in Web 2.0. Providing REST APIs can help create social communities because your content is in a format that can be consumed through simple Internet channels. Many have realized this value. For example, my blog on developerWorks is available through an Atom feed. Because a REST API is provided around this content, I can now serve my blog through my Facebook page so that both my Facebook community and developerWorks readers benefit from this.
Figure 10. Socialize content
Socializing content is the next step in Web 2.0 maturity. Although you might not want all of your content socialized to the public, companies are realizing similar value behind the firewall through internal communities.
Web 2.0 technologies offer many benefits to your development process, the scalability of your system, and the reach of your data, but some of your own legacy systems might hold you back from reaching Web 2.0 maturity. It is important to begin embracing the Web as a platform and to enable evolution through the adoption of modern architectures.
The author thanks Kyle Brown and Chris Mitchell for reviewing this article and providing valuable feedback.