Wayne Beaton recently mentioned a program on his blog which keeps Eclipse in memory for better handling and performance. I recently did something similar with Tivoli Performance Viewer. By using the -Xms and -Xmx parameters on the Java command I was able to increase the default memory allocation pool used by the JRE. This simple increase made my life much easier during a recent run of performance tests for a new portal we were putting into production. The tool was especially sluggish when trying to log the results of some of our tests to create a baseline, and this memory increase helped a lot.
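For reference, the flags work the same way on any Java launch. The heap sizes and jar name below are only placeholders, not the actual values I used:

```shell
# Start the JVM with a 256 MB initial heap (-Xms) and let it grow to
# a 512 MB maximum (-Xmx) before launching a memory-hungry Java tool.
java -Xms256m -Xmx512m -jar someTool.jar
```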
So, what does this have to do with Portal? Actually, quite a bit. Because WebSphere Portal sits on top of WebSphere Application Server, it's important to understand that your knowledge shouldn't stop at the portal itself. The more you know about WAS, J2EE, IBM tools, and related technologies, the better your portal architecture and design will be. Many times I see portal teams building a custom security or caching framework because they don't know what is already available on the platform. WAS supports a number of APIs and features that are proven and supportable for use in your application. Because these technologies are not always described in the Portal InfoCenter, many teams believe they are part of the WebSphere Administrator's domain. As a portal practitioner, I know that as I learn more about WebSphere performance and security and how to leverage existing technology, my designs continue to improve.
The moral here is to start learning more about the underlying technologies associated with WebSphere Portal. Yes, I know it's hard. There is already too much to learn about portal as it is. Start at a high level, and simply be aware of things that might be useful to you as you plan your project or design. Try to start a brown bag program with your colleagues and have someone bring new knowledge to the table every month. You can do what I do and try to spend one hour a week reviewing what is new on the developerWorks WebSphere and WebSphere Portal Zones. Try to bookmark interesting columns for later study. I'd be happy to hear your comments about what you do to stay abreast of technology as it continues to speed past us as if we were standing still.
Have fun!
Portal in Action
I recently stumbled across the following article by Vance McCarthy, Today's Portals "Inadequate" for Web Services?
It was an interesting take, and while I agree with the title, I do so for very different reasons. I don't think portals can evolve that drastically in the near term. There may be new and different clients to handle the complex situations Mr. McCarthy suggests; however, we will continue to use more traditional portals, and their functionality will continue to evolve. Improvements in application integration, workflow, collaboration, SSO, and the like are continuing, but the idea of a universal client is still a very important concept. In many cases the traditional portal can only evolve as fast as a standard browser.
What concerns me more about SOA and portals is the design aspect of building a portal that consumes exposed services. Portals and portlets can scale very quickly, and integrating with any backend system can easily create a server-stopping bottleneck. See our article in the IBM WebSphere Technical Journal, Using the Command Cache to Improve Portal Application Performance, for one approach to improving service integration performance.
As customers continue to get excited about, and adopt, the concepts of SOA, Web Services, Web Services for Remote Portlets (WSRP), and so forth, it could spell disaster for an unprepared portal team whose job is to integrate with the many new services popping up around the enterprise. This is especially true when adopting several new technologies within your project, such as JavaServer Faces, which can hide the implementation from a less experienced developer.
Imagine a scenario where you create a portlet that acts as a web service client. This portlet displays some type of content that can be viewed in different ways, or perhaps shows different topics depending upon the provided parameters. The portal team makes several copies of this portlet and places them all on the home page for your users to see when they log on. So, in addition to multiplying the number of web service calls by the number of users logging in, you can now multiply the product of that number by the number of portlets on your page.
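To make the multiplication concrete, here is a back-of-the-envelope sketch; the numbers are purely illustrative, not from any real portal:

```java
public class LoadEstimate {

    // One page render fires one back-end call per portlet copy on the page.
    public static long callsPerLogin(int portletsOnPage, int callsPerPortlet) {
        return (long) portletsOnPage * callsPerPortlet;
    }

    // Multiply again by the number of logins in the period you care about.
    public static long callsForLogins(int logins, int portletsOnPage, int callsPerPortlet) {
        return logins * callsPerLogin(portletsOnPage, callsPerPortlet);
    }

    public static void main(String[] args) {
        // 5 copies of the portlet on the home page, 1 web service call each,
        // 10,000 logins over the morning peak: 50,000 back-end calls.
        System.out.println(callsForLogins(10_000, 5, 1));
    }
}
```

Five portlets and one call each looks harmless per user; it is the product across the whole login population that crushes the back end.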
You have pretty much killed your back end if it is not designed to handle this many requests. Consider this effect carefully when starting the design of your portal. Caching, or limiting what is happening on a single page, especially the home page, should be part of the design effort. Service level agreements on both the portal and the back-end services must be carefully considered, as must communicating to the service provider an accurate estimate of the load the portal will put on the service.
More on this later as we continue to assist customers with these issues...
Wow! I've been slammed with questions about JSR 168 and writing portlets. Much of it has come from folks having issues writing portlets themselves. In some cases, after understanding the issues, it has been a matter of reading the spec to understand how the portlet should work in these cases. It's not terribly different from writing portlets using the WebSphere Portal API; at least the concepts and design traits are pretty similar. I encourage everyone to read the JSR 168 specification before they begin to write JSR 168 portlets, or whenever they run into issues. There are also a number of articles on developerWorks that provide examples and best practices.
The other problem that seems to be occurring is portlets that appear to conform to the specification but do not run on different containers. I am still investigating this one, but as portal vendors enhance their own versions of the specification, this problem could get bigger.
Unfortunately, there is no easy solution to this, since the initial specification is so light in functionality (think WebSphere Portal version 2.1). A hard decision needs to be made between portability and taking advantage of enhanced functionality. Vendors will face this issue the most as they attempt to deliver full-featured portlets that run on multiple platforms and interface with their own systems. For customers who are building their own portlets it may not be as hard a choice, as they strive to meet user requirements.
Recently someone asked me a question about using SSL with WebSphere Portal. The question was how to switch back and forth between HTTP and HTTPS for different parts of their portal. This is a pretty common question, and my answer has usually been the obvious one: that it is not possible. But really, that is not the whole story.
The portal is set up to be protocol agnostic, and can actually be configured to be domain agnostic as well. This means that whatever protocol and/or domain you use to make the portal request is what the portal will use to create all return and navigation URLs. The common scenario for many portals has been to use HTTP for the public pages and then do an HTTPS redirect during log-on to provide a secure submission of the user credentials. At this point the entire protected portion of the portal, the wps/myportal part, is accessed using HTTPS. The portal provides this redirect ability out of the box by allowing you to configure the log-on command with the ssl=true attribute.
The WebSphere Portal InfoCenter has a whole section on setting up SSL with WebSphere Portal, what commands and configuration changes are possible, and why this is important.
Sometimes, for various reasons, the requirement is to use SSL for only limited sections of the portal. While this is possible, the main problem is that once you are using a specific protocol, you're somewhat "trapped", and it is problematic to switch back to the original protocol. There is some speculation that by using mapped URLs and HTTP Server rewrite rules, you could come up with techniques to switch back and forth. Adding some hard-coded tabs into your navigation, or perhaps a judicious use of virtual portals, could also expand your possibilities.
A better option would be the ability to tag each URL within the portal to provide a link that is either secure, not secure, or follows the current security protocol. Unfortunately, this capability is not currently available within the portal.
There is hope on the horizon (I think): the JSR 168 specification provides an optional setSecure() method on the PortletURL object that has just this ability. While it wouldn't handle every link in the portal, such as navigational links, it would offer a more programmatic way for developers to provide this switching ability.
After playing with the setting for a while and not getting it to work, I took another trip through the InfoCenter and found the following tidbit:
Portlet URL security
The setSecure() method of the PortletURL interface is not supported. The portlet URL will always have the same security level as the current request. Likewise, the secure attribute of the portlet URL tags is not supported.
While this is a bit frustrating, it does show there is some thinking in the right direction. The attributes seem to be available within the portlet API and tag libraries, so in time this capability should become a reality. Since this is currently optional within the spec, the portal is still very much in compliance with the specification.
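For completeness, here is how the spec's optional setSecure() call looks in portlet code. As the InfoCenter excerpt notes, WebSphere Portal did not honor it at the time, so treat this as a sketch of the JSR 168 API rather than of working WebSphere behavior (the class name is mine):

```java
import java.io.IOException;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletSecurityException;
import javax.portlet.PortletURL;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class SecureLinkPortlet extends GenericPortlet {

    protected void doView(RenderRequest request, RenderResponse response)
            throws IOException {
        PortletURL url = response.createRenderURL();
        try {
            // Ask the container to generate an HTTPS link for this URL only.
            // setSecure() is optional in JSR 168; a container may throw
            // PortletSecurityException if it cannot honor the request.
            url.setSecure(true);
        } catch (PortletSecurityException e) {
            // Fall back to the protocol of the current request.
        }
        response.setContentType("text/html");
        response.getWriter().write("<a href=\"" + url + "\">Secure view</a>");
    }
}
```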
For now, where there is a will, there is a way, and if the requirement is being forced on your portal, then budget for the extra design and development effort to make it a reality, perhaps using some of the techniques I mentioned above. If cost and speed to delivery are driving factors for your portal, then plan on an all-SSL-or-nothing approach for the near future. There are good arguments that using SSL for all of your traffic within the portal is the more secure approach.
I get my inspiration for some of these topics from whatever I may be currently working on, and also from many of the questions I am asked by my colleagues and customers working in different situations. This last week has been a hodgepodge of questions, but maybe I can weave a central theme. I received a few questions about integration and migration, which are not always the easiest topics to discuss. There are many ways to integrate existing applications and content, from the simple use of IFrames and pop-ups to a more full-featured approach of building new portlets as a front end for your applications and information.
A quick word on pop-ups. I am of the opinion that pop-ups are not always a bad thing. It is a semi-popular opinion among many designers and information architects that pop-ups are evil and reduce the usability of a site. I'm pretty sure this derives from the early days of web site building, when pop-ups were all the rage and then quickly fell out of vogue. From an integration standpoint, however, pop-ups can be a viable option. Consider the following benefits:
Save tremendous cost, probably in the thousands, toward the integration of an application that may provide little return on that investment.
Take advantage of any single sign-on options in the enterprise, such as Tivoli Access Manager or SiteMinder.
Allow you to release an initial portal quickly, with more functionality and fewer bugs, than waiting for your first large integrated release.
The point here is that if you have some arbitrary rule such as "the portal shall have no pop-ups", make sure you understand the cost of that decision, both in time and money. I'm not saying everyone should use pop-ups, and I do know that they can quickly get out of control, but when used appropriately, with the right branding and strategy, they can be a very effective weapon in your integration arsenal.
OK, so what if you want to more fully migrate an existing application into a portlet? There are a number of options besides those I mentioned above. Web Clipping may offer some alternatives if you already have the application running on a different server. Streaming HTML content via the Content Service may be another option. For direct migration of servlets and JSPs, you have to modify code. Servlets have to be converted to portlet classes, and JSPs have to be modified to encode parameters and use portlet URLs. Sorry, that's the way it works. : (
To illustrate performing different actions, let me walk through a simple case. Remember that render URLs and action URLs are different cases.
An action URL will trigger the processAction() method, after which the render() method is called during the render phase.
A render URL will skip the action phase and go straight to the render() method, which GenericPortlet dispatches to the appropriate doXXX() method.
For a simple app where you want to display several different JSPs, it may be that you can just use a render URL to trigger the change. Most of the work can take place in doView() and the JSP itself. We'll discuss action handlers and best practices around them another time. Creating the URL is pretty simple. Here we create a simple render URL, add a parameter, and then place it on the request so it can be used by the JSP. The main alternative is to use a bean to pass these types of values around in your portlet, but in this case we will keep it simple.
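The original snippet did not survive in this copy, so here is my own minimal sketch of the idea under JSR 168; the class, parameter, attribute, and JSP names are all placeholders of my choosing:

```java
import java.io.IOException;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.PortletRequestDispatcher;
import javax.portlet.PortletURL;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class NavigationPortlet extends GenericPortlet {

    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        // Check for the parameter carried by a previously clicked render
        // URL and pick the JSP to show on this render.
        String jsp = "second".equals(request.getParameter("jspPage"))
                ? "/jsp/Second.jsp" : "/jsp/View.jsp";

        // Create a render URL, add a parameter, and place the URL on the
        // request so the JSP can use it (for example, as a form action).
        PortletURL renderUrl = response.createRenderURL();
        renderUrl.setParameter("jspPage", "second");
        request.setAttribute("renderUrl", renderUrl);

        response.setContentType("text/html");
        PortletRequestDispatcher dispatcher =
                getPortletContext().getRequestDispatcher(jsp);
        dispatcher.include(request, response);
    }
}
```

The PortletURL's toString() produces the actual link text, which is why the JSP can simply print the attribute.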
When the JSP is called, it can access the attribute in the request and use it in a simple submit form. This example also shows how to build a simple button that your user can press to trigger a navigation action. This can easily be modified to switch modes or window states in the portlet as well.
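A minimal version of that JSP fragment could look like the following, assuming the portlet placed a PortletURL on the request under an attribute name of "renderUrl" (my own naming); the "rendersubmit" / "do render" names come straight from the discussion below:

```jsp
<%-- Use the render URL the portlet placed on the request as the form
     action. Clicking the button issues a render request back to the
     portlet, carrying the URL's parameters with it. --%>
<form method="POST" action="<%= request.getAttribute("renderUrl") %>">
    <input type="submit" name="rendersubmit" value="do render"/>
</form>
```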
Clicking this button will trigger the render phase and, in our case, doView(). You can check for the parameter you are expecting by using the getParameter() method on the request.
This should be enough to get most folks started working with URLs. Notice that I added a parameter to this URL, but I could just as easily have checked the value of the form submit parameter "rendersubmit" for a value of "do render". Things can get a little tricky, but if you do it once or twice you'll understand the differences.
If you read Best practices: Developing portlets using JSR 168 and WebSphere Portal V5.02, you will see that I broke rule number 15 about not using a POST for a render URL. I could have easily used a link here, or changing the form method to GET might have been simpler. I just wanted to show a lot of different ideas at once.
I'll try to work up a full example using action URLs and post it here in the near future!
I had an interesting discussion last week. A consultant was on a project where the customer had a number of functional and non-functional or organizational requirements that were somewhat competing with each other. As I tried to wade through all the stipulations and provide some good feedback, I thought: there have to be a number of solutions to this problem that would provide the same or a very similar user experience, but none of them were going to really satisfy this customer. The design and functionality of the pages and portlets had been predefined, as well as the development API and framework and the approach the developer should use to build it. It was apparent all this was done without taking the portal framework and its capabilities into account. To be fair, there were a number of reasons for this list of decisions, any of which could be debated; however, my main concerns were two-fold.
First, very strict guidelines were given that didn't take portal capabilities and best practices into account. It was assumed that the software could be made to fit within the given parameters. Second, the developer given this task simply took the requirements at face value, accepting that this was how it had to be done.
So who is driving these requirements?
It has to be made clear that this should not be a one-way process, but rather an iterative cycle using some type of feedback mechanism, or one that introduces technical folks into the project life cycle early. If an architect is at a technically deep enough level with the product or technology, then they can often provide this guidance and walk the developer through how it should be done. Often this is not the case, and someone with deeper skills in the product suite needs to provide information, in real time, to the rest of the team about decisions. This goes for Business Analysts, Marketing, Designers, and others who are making decisions that can affect the technical outcome. Working with an Architect or Specialist who is knowledgeable about the portal framework and how to best take advantage of that technology can greatly increase your chances of success, or at least reduce the number of problems you may face. Enterprise standards often have to be revisited when introducing a portal into your environment. In some cases exceptions need to be made, or additional standards created that take the portal into account.
The simple moral here is to ask questions and speak up as early as possible when designing your portal. Ask the Portal SME on your team if a specific decision makes sense. Then give that person some time to research the answer and maybe offer some different solutions. What? You don't have a Portal SME on your team? Then shame on you. If this is your first portal project, then more shame. The expense of adding an experienced member to the team is easily outweighed by the cost of a failed or troubled project. Developers are not blameless here either. It is important that developers speak up when they see potential problems or know that something has been poorly designed. Remember, not only do you have to build it and deploy it, but you may have to maintain it in the future.
Look to my next blog for a continuation of these thoughts and a discussion about the trade-off between adding real business value and wants...
I wanted to finish the thread that I started in my last blog before I go back to more technical issues. I'm currently playing around with the preferences validator in JSR 168, and I think that will make an interesting next entry. For now, let's complete this issue.
It is generally accepted that business requirements should drive functional and some nonfunctional requirements. The business doesn't want to pay for functionality it doesn't need or want, and likewise it should get the most bang for its buck in terms of portability, scalability, and all the other 'ilities that are needed. However, there are still many nonfunctional, or rather organizational or architectural, requirements that need to be determined, either at a project or an organizational level. For these I'm always a bit wary of the driver behind them and the value they may bring.
For example, the use of Struts in portal sometimes concerns me. (Before the Struts people start to feel picked on, let me explain.) There are solid business and financial drivers for using the Struts Portal Framework within WebSphere Portal. "Because it's cool" or "I want to learn it" is probably not one of them.
During any project I have a very simple frame of reference. What is it going to take to allow me to deliver this application within the given time frame and within or even under budget? In many cases the team is new to WebSphere Portal and J2EE technologies, and reducing the complexity or layers within an application and taking advantage of what is available within the portal framework can help now and as the application evolves.
I like the idea of creating standards across a project, or better yet, across your organization. The use of something like Struts can fit well within this ideal, but here again, there may be many exception cases. Is it going to cost you more in initial development, ongoing maintenance, and even future upgrade headaches to implement within these guidelines? Portability might be another reason to take this route, but it is probably not realistic to think your code will port seamlessly across platforms just because it's based on Struts. This will be more of a problem if you want to take advantage of specific portal functionality within your code.
I have the same concerns around some of the methodologies a team sometimes wants to use within their project. Some teams get really excited about using the Rational Unified Process and all the tools IBM Rational provides to assist in this area, or maybe they want to go another way, such as Extreme Programming. With anything of this nature, your first question should be: what is the experience of your team? Are they skilled in these areas, or are you willing to make the investment to get them to the level they need to be? Secondly, what are your immediate needs? If you have to deliver a portal very quickly, then many shortcuts will be made. How will this affect your methodology strategy?
Fortunately, many organizations are on the right track and seriously evaluate all the options and the value they bring. This ensures that either the requirement is really warranted, or the organization is willing to make the effort to give the team the tools and training they need to do the job correctly. Don't think I'm against developers or teams learning new things; that's how we get better and continue to improve our processes and projects. However, learning at the expense of the project delivery date or capability should be weighed heavily. Trust me, build a few portal projects and the learning will come. : )
Recently, I've been spending time digging through the JSR 168 specification to ensure I understand everything that is available. After spending years learning different APIs, it is time yet again to learn another one. One of the more interesting but probably overlooked aspects is the availability of the preferences validator, which ensures that user input is correct before any preferences are changed or saved.
I'll admit, I'm from the school of simpler is better. I like to keep things simple during a project. Most of this, I think, derives from my background as a development lead on projects where the team was usually pulled together at the beginning of a project. In most cases the technology we were using was pretty new to everyone, so much of my effort was spent getting folks to a level of competence on the project. Plus, with all the different skill and experience levels, it was very necessary to keep the design as simple as possible to ensure success.
The validator is relatively easy to use, but it will require you to think a bit about how you will implement your portlet. This is a good thing, as it will force you to spend more time on the design of your portlet and how it might interact with invalid preferences. Here is a sample class that implements the PreferencesValidator interface.
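The sample class itself did not survive in this copy, so the following is my own minimal reconstruction of a JSR 168 PreferencesValidator; the class name and the "zipCode" preference are illustrative, not from the original:

```java
import java.util.ArrayList;
import java.util.Collection;
import javax.portlet.PortletPreferences;
import javax.portlet.PreferencesValidator;
import javax.portlet.ValidatorException;

public class ZipCodeValidator implements PreferencesValidator {

    // Called by the container before any preferences are stored.
    public void validate(PortletPreferences preferences) throws ValidatorException {
        Collection<String> failedKeys = new ArrayList<String>();

        String zip = preferences.getValue("zipCode", "");
        if (!zip.matches("\\d{5}")) {
            failedKeys.add("zipCode");
        }

        if (!failedKeys.isEmpty()) {
            // The failed keys travel with the exception so the portlet
            // can tell the user which preferences were rejected.
            throw new ValidatorException("Invalid preference value(s)", failedKeys);
        }
    }
}
```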
You can enable your validator by adding the &lt;preferences-validator&gt; tag to the portlet deployment descriptor. Here is the added tag, along with a sample preference used by the portlet.
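The descriptor addition looks roughly like this; the validator class and preference names are my own placeholders:

```xml
<!-- Inside the <portlet> element of portlet.xml -->
<portlet-preferences>
    <preference>
        <name>zipCode</name>
        <value>00000</value>
    </preference>
    <!-- Names the PreferencesValidator implementation the container
         must call before storing this portlet's preferences. -->
    <preferences-validator>
        com.example.portlet.ZipCodeValidator
    </preferences-validator>
</portlet-preferences>
```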
The spec states that the preferences validator will be called before any preferences are saved or modified. Here in our processAction() method we can catch any errors thrown by the validator and set a parameter that can be displayed to the user. I simplified the code within the try block, but there should be a call to store() in this area.
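A reconstruction of that processAction() shape, with the store() call shown explicitly in the try block (the class, preference, and parameter names are mine):

```java
import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletPreferences;
import javax.portlet.ValidatorException;

public class PreferencePortlet extends GenericPortlet {

    public void processAction(ActionRequest request, ActionResponse response) {
        PortletPreferences preferences = request.getPreferences();
        try {
            preferences.setValue("zipCode", request.getParameter("zipCode"));
            // store() is what triggers the PreferencesValidator; nothing
            // is saved if the validator throws.
            preferences.store();
        } catch (ValidatorException e) {
            // Surface the failure to the render phase for display.
            response.setRenderParameter("errorMessage", e.getMessage());
        } catch (Exception e) {
            response.setRenderParameter("errorMessage", "Unable to save preferences");
        }
    }
}
```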
Obviously, how preferences are validated, and the use of the validator in general, will need further discussion. Two very different approaches are syntactic and semantic validation (to borrow those terms from Rod Johnson's excellent book, Expert One-on-One J2EE Design and Development). Syntactic validation means simply checking that an email address has an @ symbol or that a zip code is 5 digits long. Semantic validation has more meaning to the application being considered, ensuring that things like an ID number are actually valid, or that several values match the query being considered.
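As a rough illustration of the syntactic side, here are two deliberately naive checks; a semantic check, by contrast, would go to a data source to confirm the value actually exists:

```java
public class SyntacticChecks {

    // Syntactic: the value merely has to look like an email address.
    public static boolean looksLikeEmail(String value) {
        return value != null && value.indexOf('@') > 0;
    }

    // Syntactic: exactly five digits, nothing more.
    public static boolean looksLikeZipCode(String value) {
        return value != null && value.matches("\\d{5}");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeEmail("user@example.com")); // true
        System.out.println(looksLikeZipCode("1234"));           // false
    }
}
```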
As a side note: I spent the better half of one morning trying to figure out why my PreferencesValidator class wasn't getting called. I was almost ready to call support when I realized that I had wrapped the call in an 'if' statement that wasn't being reached. Nice, huh? Since we have all been there, I thought you might enjoy that little tidbit.
ACK!!! It's been too long since my last post. I got sidetracked with life, I guess! I'll fix that with, hopefully, a thoughtful discussion of unit testing in the portal world.
Having worked on server-side Java projects for many years now, and built portals for the last 4 or 5, I have yet to see good unit testing in a portal environment outside of my own personal efforts. We like to talk about unit testing. In fact, I would bet most project plans have a line item for unit testing. Often unit testing is described as that magical thing developers do right before the code moves to another stage in the project, like Q/A or UAT. Since we continue to discuss unit tests (real or imagined) in our projects, it must mean that we think this is an important topic, an important line item that can be used to improve the quality and reliability of our code. If only we knew the half of it!
Recently a customer asked me about using JUnit with their portlet development. I was pleased to get the question. It's only been asked a few times in the past, and never with the real intention that it was something the group would like to actively pursue. Fortunately, as I mentioned, I have been looking at this issue for a while, years actually, and working on some approaches to integrate unit testing with a portal project. For me this is one of the bigger barriers to running an Agile or Extreme Programming portal project.
Most people who are involved with software projects of any type understand the need for good testing. Heck, for that matter, any user who has ever had to deal with a buggy piece of software can understand the need. Testing is designed to catch problems before software ships, or a system goes live. In some cases releasing buggy software can annoy the user and may result in you being labeled a bad programmer or your company being identified as one that produces bad software. In more extreme cases, a buggy piece of code can result in lost revenue to a company, or may even be life threatening.
So what is a bug? Sometimes this is difficult to pin down. In general, any function that returns incorrect results could be considered a bug. But in most software products, we expand this definition to include any action or inaction that does not comply with known requirements. Rarely are product requirements written to incorporate every possible input and output combination, so in many cases developers have to make do. There are some obvious and not-so-obvious cases that developers try to handle when building a part of the application, such as boundary values, invalid or malformed input, and error conditions.
Many of these types of tests are not enumerated within the requirements or test scenarios and are often something the developer just assumes they must do. In addition, non-functional tests must often be performed. This includes making sure that the results of a test are not only accurate for one user, but remain consistently accurate as many users access the same code, and that the results displayed to each user of the system are correct for that user only. Developing an application that works correctly at this level can be a difficult task.
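As a hypothetical illustration of the "correct for that user only" point: a handler that stashes per-request state in a shared field works fine for one user and silently cross-wires results under concurrency, because one servlet or portlet instance serves every request.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BalanceLookup {

    private final Map<String, Integer> balances = new ConcurrentHashMap<>();

    // BROKEN pattern: one instance serves all users, so this field can be
    // overwritten by user B's thread after user A's request has set it
    // but before A's response is built. A may then see B's balance.
    private String currentUser;

    public int brokenBalance(String user) {
        currentUser = user;   // shared mutable state across requests
        // ...other work here; another thread may reset currentUser...
        Integer b = balances.get(currentUser);
        return b == null ? 0 : b;
    }

    // SAFE pattern: per-user state lives only in parameters and locals.
    public int balanceFor(String user) {
        Integer b = balances.get(user);
        return b == null ? 0 : b;
    }

    public void deposit(String user, int amount) {
        balances.merge(user, amount, Integer::sum);
    }
}
```

A single-threaded test passes for both methods, which is exactly why this class of bug slips through: only concurrent tests expose the broken variant.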
A balance needs to be achieved between performance and functional correctness, and without testing this can never happen. There are many types of testing that can be performed within software development. The figure below illustrates a realistic view of the types of testing we could generally define as necessary on a development project.
In reality there is often no hard and fast set of rules for the types of testing that need to be performed within a project. The illustration above shows the layers, or different types, of testing that you might see on a common project. Different project teams will often put a different emphasis on some areas. Many projects will focus heavily on performance, while others may not give performance a second thought, at least not until the site is live. Unfortunately, that's a topic for a different post. Most of the layers (unit, integration, functional, user acceptance, and performance testing) should be readily understandable.
Whether you perform all of these types of tests, or even add more testing layers, it is important to know what you expect to gain from a testing cycle. Without this end result in mind, you are wasting time and money and possibly jeopardizing the success of the product. There are many great books available on testing strategies, which can help you define a good process. It is also interesting that each layer often corresponds to a separate environment, which reduces competition between testing cycles and teams on very active or tight projects.
What is a Unit?
Getting back to the idea of a unit: we typically look at a unit as the first link in a chain consisting of several types of tests. This provides a way for developers, within the comfort of their own world, to assure themselves that the smallest unit or component of a system functions as they expect. This assurance also provides the base for additional higher-level testing as the system moves away from the center circle to a broader, more aggregate view of the system.
A simple definition of unit testing would be the ability to test the smallest unit or component of our code that is possible. While this is the essence of what we want to accomplish, it can be a little misleading. Most of the testing that occurs within a project, if testing occurs at all, happens in large chunks at the functional or user acceptance level. Looking behind the glass, we can see that there are more fine-grained ways to validate the behavior of our code. If we can assure ourselves that the behavior of individual components is correct, then it stands to reason that as those components are brought together, or integrated, they will function correctly at the higher level.
So unit testing can actually make our code better and ensure that a component behaves as expected. But there is more to it than that. A pleasant side effect of developing and running unit tests on a continuous basis is the assurance that ongoing changes to our code don't have the unexpected effect of breaking earlier working pieces.
On different projects I have often had developers manually write out the list of tests their code should be tested with before they start coding. But most of these tests would really fit within the scheme defined as integration or functional tests, since values would be input into a web page form, submitted, and the returned results examined for correctness. Is this valid as a unit test? How about this process combined with the concept of a portlet? In theory some portlets could be considered discrete components of code that can be tested as a single functional unit.
In many cases it is not realistic to assume that portlets are modular enough to be considered a single unit. However, going to the other extreme and testing at the method level may be too fine-grained in many cases. The result is a fine line in determining what to test and how to get valid results. Later in this discussion we will embark upon the process of actually writing some tests at different levels. Building upon that, we can write tests that work at both the method level and at a higher level of abstraction to examine the behavior of the portlet (servlet) controller and action handler.
Before the Web and server-side programming, unit testing was not as difficult to pin down. Most applications had a main() method of some type that could be run on the command line. Building unit tests that had knowledge of what the application was trying to do was a simple matter of building test classes that ran our program, with the advantage of being able to see inside. With the advent of the web and J2EE programming, our components now run inside a container that creates or instantiates our classes and then calls specific methods as required. In addition, the container provides objects of its own as parameters to our methods that contain the input and output values for processing. We'll talk more about in-container vs. out-of-container types of testing, with several chapters dedicated to each type.
White-Box vs. Black-Box
When we do functional testing we are treating our code like a black box. We don't really know what happens inside the code; we simply enter the required input and examine the results. White-box tests, by comparison, are written with knowledge of what is going on inside, and may even access internal values. We understand what is being created and where values are being stored. In reality, unit testing encompasses both of these approaches, and neither should be ignored when trying to design a good set of tests. For example, let's say that a developer is designing a process that allows a user to enter two values into a form. When submitted, the form will return the results of an action performed on those two values. The developer may want to write a set of tests that directly execute a particular method, based on his or her knowledge of how the method operates or should operate. If there are several branches that the method should take based on the values or differences in value, then tests may be written that directly exercise all of those different cases. On the other hand, this method may be used within a larger process that could be tested as a whole rather than just the particular method. In this case the individual method is tested in more of a black-box fashion.
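To make the distinction concrete, here is a minimal sketch in plain Java. The class and method names (TwoValueForm, compute, submit) are invented for illustration; a white-box test targets the internal branching directly, while a black-box test drives only the public entry point and checks the result.

```java
// Hypothetical two-value form component (names invented for illustration).
public class TwoValueForm {

    // Public entry point -- a black-box test drives only this method
    // and checks the returned result.
    public String submit(String op, int a, int b) {
        return String.valueOf(compute(op, a, b));
    }

    // Internal branching logic -- a white-box test, knowing this method
    // exists, can exercise each branch directly.
    int compute(String op, int a, int b) {
        if ("add".equals(op)) {
            return a + b;
        }
        return a - b; // anything else is treated as subtraction
    }

    public static void main(String[] args) {
        TwoValueForm form = new TwoValueForm();
        // White-box checks: one per branch of compute().
        if (form.compute("add", 2, 3) != 5) throw new AssertionError();
        if (form.compute("sub", 2, 3) != -1) throw new AssertionError();
        // Black-box check: only the observable output of submit().
        if (!"5".equals(form.submit("add", 2, 3))) throw new AssertionError();
        System.out.println("all checks passed");
    }
}
```

Notice that the white-box checks would have to change if compute() were refactored, while the black-box check survives any internal redesign that preserves the behavior.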
Keeping Developers Sane
Both black-box and white-box testing approaches are valid and help ensure that as a component evolves and the design is refactored, previously working functionality remains correct. Adding new tests to a test suite doesn't detract from work that is already done. This is the basis behind test-first or test-driven development, a methodology in which the developer writes an initial test, which will fail, and then writes the code to make the test pass. Once a test passes it should never be allowed to fail again. This means that working code stays working as the system evolves. In theory, this provides a pretty solid base upon which a developer can work peacefully. Knowing that the code works at this level, and having a complete set of regression tests to rely upon as new changes are made, lets a developer quickly ensure that they have not broken something in the quest for new functionality.
I recently performed an architecture and design review on a portal project for a large company. One suggestion that came out of this review was to take a serious look at one of the major portlets on the site. This portlet was built in a monolithic fashion and made up most of the site as it existed. The main controller class of the portlet consisted of over 50 methods that were called as a result of a single large action handler. One of the issues facing the developers who had inherited this code was that they were afraid to make changes as new requirements were requested by the business.
This is an all too common predicament that teams find themselves in: inheriting code that they are afraid of touching, or rather afraid of breaking. It is interesting to think what a good set of tests might do for this situation. Of course hindsight is always 20/20, so I hesitate to speculate about what would have happened had the original developer written a set of tests for the code. I would bet that the code might have had a different design, because thinking about unit testing can make a developer think more clearly about how the system is designed. More realistically, the next set of developers would not have been quite so afraid of adding methods or making changes, knowing that an existing set of unit tests could be run to ensure they did not break anything in the process.
Agile developers are some of the biggest proponents of unit testing and as such many support the idea of test first development. Test first development is a great way to ensure that a developer fully understands what they are creating before they actually code it.
Generally, this testing process can be automated using a testing framework such as JUnit. Of course all this testing can come at a price. Not all testing frameworks are created alike, so training or some configuration may be necessary before test-first development becomes seamless within your environment. Also, if the development effort is taking advantage of an application framework such as a portal or e-commerce server, the testing framework may not integrate well within the environment. Finally, to effectively conduct test-driven development the entire team needs to be skilled enough to design and code appropriately. All of this is exactly what this set of postings is about: providing you with some basic information and the tools needed to perform this process easily and automatically within your environment.
In-Container vs. Mock Object Testing
Most developers or architects, when initially confronted with the idea of unit testing, are convinced that in-container testing is the way to go. I know that initially I went through that process and spent several months trying to determine the best approach to accomplishing this goal. In-container testing has the goal of performing tests while the code is actually running within the servlet, or in our case portlet, container. While at first glance this appears to be the preferred way to run your tests, there are many considerations in adopting this approach. Let's start with the several obvious advantages to running in-container tests in your environment.
These points sound obvious at first, however it's important to realize that your goal is not to test the container. Hopefully this has been done for you before you deploy your code. In addition, not all containers are created equal, and even configuration differences between machines and platforms may affect the results.
Now let me outline some of the initial disadvantages of an in-container approach.
It's not really my goal to force you into one particular approach or the other. That's why I'm discussing both approaches here, along with some of my personal recommendations around building your processes with whichever direction you choose.
So if you don't perform tests in-container, what do you do? Mock objects provide an alternative: a way to build our own container, done in such a way that our portlet doesn't even know the difference. There are a variety of ways to accomplish testing outside of the container, and most of them involve some type of mock object scenario. Depending upon the complexity of the approach, these objects could be formally classified as stubs, proxies, or mock objects.
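As a rough sketch of the idea, the JDK's java.lang.reflect.Proxy class can fabricate an implementation of a container interface on the fly. The Request interface below is an invented stand-in for a container type such as javax.portlet.PortletRequest, reduced to one method so the example stays self-contained; a real mock would cover more of the interface.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class MockRequestDemo {

    // Stand-in for a container interface (e.g. javax.portlet.PortletRequest),
    // trimmed to one method so the sketch is self-contained.
    public interface Request {
        String getParameter(String name);
    }

    // Build a mock Request backed by a map of canned parameter values.
    public static Request mockRequest(final Map<String, String> params) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) {
                if ("getParameter".equals(method.getName())) {
                    return params.get((String) args[0]);
                }
                return null; // other methods are not needed by this test
            }
        };
        return (Request) Proxy.newProxyInstance(
                Request.class.getClassLoader(),
                new Class[] { Request.class },
                handler);
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<String, String>();
        params.put("miles", "10");
        Request request = mockRequest(params);
        // The code under test never knows it is talking to a mock.
        System.out.println(request.getParameter("miles")); // prints 10
    }
}
```

Frameworks such as EasyMock automate exactly this kind of dynamic proxy construction, plus expectation checking, so you rarely write the handler by hand.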
OK, I've rambled on enough. In my next post I'll continue this discussion and try to actually add some code that will show you what we are trying to accomplish. The approach that I have in mind is to build a simple set of objects that will function as a sort of mock portal within which our portlet tests will run. Since we will focus on portlets developed using JSR 168, even those of you who aren't using WebSphere Portal may find some of this interesting. In addition, we can talk about building objects on the fly using a dynamic proxy framework such as EasyMock. I think you will find all of the approaches quite understandable and even fun in many cases. Until next time!
OK, so in my last entry I talked a lot about testing and what unit testing is all about. You should really read these entries in order to understand my complete train of thought. Here is a link to the last entry if you have not read it yet.
Link to last blog entry
Now let's actually see some code and how we might approach unit testing from a portlet perspective. For all of you J2EE programmers who have been using JUnit on your projects this may seem pretty basic, but for portal programmers, a lot of this is new ground. Remember I'm focusing on JSR 168 examples here, so even if you are not using WebSphere Portal (which I can't imagine would be the case), you may be able to learn from these discussions.
OK, so what are some of the first real steps, and the benefits and challenges, of unit testing portlets? Let's start slowly and explore how a developer might begin writing some tests for a simple portlet, and how testing can affect the development effort. Ideally we would like this to affect us in a positive way, perhaps by making our code better. At the very least it should improve our process, or the quality of our final product. In the realm of test-driven development we can perhaps achieve all of these goals. We would like the idea of unit testing to allow us to design code that is easily testable and easier to refactor when changes are necessary. If the quality of our code is improved as a side effect then we'll take it.
First Unit Tests
Imagine that you have just started a new job as a portal developer at ACME Development Corp. and have just been given your first assignment on a project. You are provided with an existing portlet that was built several months ago, by a programmer who has since left the company, to provide some much needed functionality for the ACME Intranet Portal. Your assignment is to enhance this portlet with new capability and increase its value to the company. The portlet is very simple, but it is one of the most used portlets on the portal, and any loss of functionality would be disastrous for both the company and your budding career.
Looking at the figure above we can intuitively see how the portlet works. A user enters some number, in miles, and the portlet returns the number of kilometers that is equal in distance to that number. OK, we're not sure why this portlet is so important to the company given what it actually does, but then again, we're not sure why you work at ACME Corp. either!
You know that you really don't want to screw this first assignment up, and the boss hasn't given you a deadline for the changes you are expected to make. So, after examining the portlet code to be sure you understand how it works, you are going to spend some time writing some unit tests around the existing functionality to ensure that your new changes don't impair current function or introduce new problems.
ACME Portlet Controller
Reviewing the code we notice that it's been designed, or at least refactored at some point, in a way that allows us to immediately work up a simple test. This is a nice convenience and lets us get going right away setting up our first example. As we progress through our example we can come up with more interesting ways to test full portlets, but for now let's hit the basics. Take a quick look at the portlet controller for our sample portlet.
The next section of the code is the doView() method. This method does some setup and invokes the JSP that will render the display to the user.
Notice that the doView() method contains no real business logic. It simply dispatches the VIEW_JSP as defined above. The processAction() method is where the real action happens, as you see in the next section.
The processAction() method waits for a submission by the user and then calculates the correct response to return. It sets the calculated value in the session for display by the JSP during the render phase. This method also makes use of the miles2Kilometers() method discussed below. My goal was to keep this example as simple as possible while still remaining reasonable.
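The original listing isn't reproduced here, but roughly sketching what processAction() and miles2Kilometers() do: the container types are replaced by plain maps so the sketch is self-contained and runnable, and the parameter name ("miles"), attribute name ("kilometers"), and constant are assumptions rather than the original code.

```java
import java.util.HashMap;
import java.util.Map;

public class ProcessActionSketch {

    // Stand-ins for the container objects (the ActionRequest parameters
    // and the session bean), reduced to maps so the sketch runs anywhere.
    public static Map<String, String> requestParams = new HashMap<String, String>();
    public static Map<String, Object> sessionBean = new HashMap<String, Object>();

    // 1 mile = 1.609344 kilometers (international mile).
    static final double KM_PER_MILE = 1.609344;

    // The helper containing the real business logic.
    public static double miles2Kilometers(double miles) {
        return miles * KM_PER_MILE;
    }

    // Roughly what the original processAction() does: read the submitted
    // miles value, convert it, and stash the result in the session bean
    // for the JSP to display during the next render phase.
    public static void processAction() {
        String miles = requestParams.get("miles");
        if (miles != null) {
            double km = miles2Kilometers(Double.parseDouble(miles));
            sessionBean.put("kilometers", Double.valueOf(km));
        }
    }

    public static void main(String[] args) {
        requestParams.put("miles", "5");
        processAction();
        System.out.println(sessionBean.get("kilometers")); // roughly 8.04672
    }
}
```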
The getSessionBean() method is a simple abstraction that provides the session bean to the processAction() method.
The miles2Kilometers() method is one of the key methods within the class file. While it is a relatively simple method to work with, it would be quite easy to make a change that affects the return result. Now that you have an idea about how the portlet works, we can see how we might build some tests around the existing behavior.
Writing and Running the Test Case
Remember that this code was handed to us. In our case it's a pretty nice place to start. The previous developer must have refactored a little while working on it, because there are some good starting points for us to work with. Looking at the controller that was outlined above, the first question is, what do we want to test? Common testing best practices often revolve around the idea of testing behavior rather than specific functions. In our case we want to set up a test that ensures the miles-to-kilometers behavior does not change as we make additional enhancements to the portlet.
Unfortunately we don't yet know how to test our portlet container methods such as processAction() and doView(), so we will stick to the simpler case of just testing the helper methods. Let's write a simple test class that exercises the miles2Kilometers() method.
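The test class from the original isn't reproduced here, but a hedged sketch of what such a first test might look like follows. In the real project this would be a JUnit TestCase named Sample1Test calling the controller class; it is written here as a plain main method, with a local copy of the helper under test, so the sketch is self-contained and runnable.

```java
// A first test for the conversion helper. In the original this would be
// a JUnit test (e.g. Sample1Test extends TestCase) calling the portlet
// controller; the helper is copied inline here so the sketch compiles alone.
public class Sample1Test {

    // Local stand-in for the controller's miles2Kilometers() helper.
    static double miles2Kilometers(double miles) {
        return miles * 1.609344;
    }

    // Minimal replacement for JUnit's assertEquals(double, double, double).
    static void assertEquals(double expected, double actual, double delta) {
        if (Math.abs(expected - actual) > delta) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        // 10 miles should be roughly 16.09344 kilometers.
        assertEquals(16.09344, miles2Kilometers(10.0), 0.0001);
        System.out.println("testMiles2Kilometers passed");
    }
}
```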
You can see that this test runs the method and checks a simple calculation. It is enough to give us the green bar for now and get us started down the path to better unit testing.
I haven't really talked about JUnit or how to set up and run things in the Rational Software Development Platform. I'll put together some notes on that for my next entry, and we'll continue to see how we can set up additional tests to shore up our portlet framework.
The Sample1Test class exists in a different package than the portlet, but in the same source directory. If you right-click on the test class you can run it as a unit test to see results similar to those shown here.
Next entry we'll discuss this a bit more and see how we can change or enhance the portlet to be more useful.
Still on the testing topic, I seem to be doing more and more performance testing nowadays. Don't get me wrong: I would much rather assist a customer with testing their portal before it goes live than come in to help fix problems after the system is already in production. There is a great article that was just published on developerWorks on making your code better.
Performance considerations for customer portal code
But what about testing the infrastructure? Often we recommend to customers that they test out their infrastructure before deploying their application, to help eliminate any installation or configuration problems. Once the application is deployed, then things become harder to diagnose. This type of baseline infrastructure testing also lets you stress the supporting components of the portal such as the database and LDAP.
How do you test? I like to recommend what I call the 4-tab test. This derives from the fact that most intranet portals consist of 4 tabs (home, my work, my life or career, and my company). Of course, if your portal Information Architecture is different this can become the 6- or 8-tab test. Having test users log in and click on some of these tabs can give you an idea of how your system will perform and what kind of page rate you may get from your portal. If you are getting poor results with this type of baseline testing, then you will definitely get poor results once the application is deployed. The key is to flush out any problems with the infrastructure as early as possible.
If you already have the application deployed and are doing some performance testing before you release the portal to your customers, there are some steps you can take if you run into problems. If the portal is slow logging in and displaying the home page, then remove all the portlets on that page, or put up a new blank home page, to determine whether the problem is with the login or whether one of the portlets on the home page is taking a long time to load.
It is important before doing any infrastructure testing that you get enough test users into your system to fully represent your targeted audience. Using one or two users over and over during your load tests will not fully test the system as caching may occur at different levels.
Turn on logging if you think the portal is having a problem. The InfoCenter is a good place to start for information on setting up and viewing the portal logs. One additional tip: it is usually not a best practice to simply make every setting change that is listed in any tuning guides you may have. Understanding how a tuning guide change will affect your configuration takes some thought and understanding of the portal environment. I once had a customer who turned off logging on the portal because it was one of the steps in a tuning guide. Ack, this gave us fits when we started having problems and we couldn't figure out why nothing was being displayed in the log files. :)
A simple question about best practices for migrating existing J2EE or .Net applications into portal was asked of me recently. As WebSphere Portal continues to mature and some of our practitioners become more experienced, we tend to forget that some of our customers are just getting started with WebSphere, Java, J2EE, and WebSphere Portal. Even the evaluation of this technology can prompt some basic questions in helping to make a determination for migrating from an existing platform to the portal environment. In hindsight, this question is not so simple. The topic can be very big, so I'm sitting here on a Saturday morning in a coffee shop, struggling with how to break it up into manageable chunks across several blog entries. For those of my readers that don't know, my kids (4) go to German Immersion school on Saturday mornings, which gives me several hours of me time to sit and drink coffee and catch up on things. Much better than mowing the lawn, don't you think? And no, my children don't like going to school on Saturday.
OK, back to the topic at hand. I think a reasonable high-level breakdown may be to discuss this topic in terms of Migration vs. Integration of applications. Let's ignore any new development that you may be considering, because that requires us to ask a different set of questions, such as should we use JSR 168 or the WebSphere API, and what about using JSF, Struts, or Bowstreet, etc. In reality some of those same questions should be asked for any migration effort, but to keep it simple we'll skip that for now. I've heard these two options of Migration vs. Integration discussed in terms of Portlet-izing vs. Portal-izing, but I think those are mostly just names that architects make up to impress customers. : )
Fellow blogger Wayne Beaton really focuses on migration within his practice; however, I think that in many cases portal may be a different animal from some of the engagements that he encounters. At least we tend to look at it differently, because we try to take advantage of all the existing tools and functionality available within portal to make migration easier. This is even more true when we start to talk about integration. I encourage you, however, to read some of his blog entries, because they may have some advice on a question you are facing.
For a migration effort, the determination should be made that the code will actually be rewritten into some form that the portal can run. This may be a portlet of some type: either JSR 168, JSF, Struts, or perhaps a Bowstreet portlet. Once that is determined, ensure that your team understands the basics of both the framework and portal, so that you are designing and migrating applications in a way that can assist with ongoing maintenance and will perform as you expect once the application goes live in production. I blog about production and testing issues occasionally, so you might want to look through old entries to see some of my thoughts on these topics.
Java and J2EE
If you have an existing J2EE web application then you may be in the best shape for a migration effort. Remember that the portal API, or JSR 168, and the servlet API are very similar. There are of course differences, and you will have to rewrite most of your applications to run correctly in the portlet container; however, much of the difference is in the packaging. Also, your team may not need as much remedial training in Java and J2EE, with the focus instead mainly on the Portal and Portlet APIs they will be using. Depending upon how your application is designed, some components such as shared services or helper classes may be migrated over as is, or at least with minimal changes. EJBs may not require a lot of changes other than deployment descriptor updates or repackaging.
Overall there is still some work to do; however, if the application is designed well, it may be minimized with a lot of cut and paste rather than a rewrite of the business logic. If the migration effort is more than just moving the functionality from one platform to the other, and the requirement is to enhance or improve existing functionality, then of course the effort will be more extensive. However, you are clearly moving into the realm of new development rather than migration in these cases. There are some integration approaches that can be taken here also, but again I'll defer those ideas for another discussion.
On more than one occasion I've been asked about using the .Net engine that comes with WebSphere Portal. OK, so really, to my knowledge there is no such thing! Really! If you don't understand the difference between .Net and J2EE and why this is the case, then you should really consider some additional training, or have some in-depth discussions with your technical leaders. For existing .Net applications there are several options across integration and migration. One option may be to turn your application into some type of service and then create a portlet that works as a consumer of that service, but this is really under the purview of integration rather than migration. True migration will be an all-out effort of re-coding the functionality in Java to run within the portal. This can be a risky effort, because it often means that your development team is more experienced in .Net and less experienced in Java. The prevailing recommendation is training, training, and more training, plus the addition of some deeply skilled services assistance to help in design and to guide your development team during the effort.
Many management teams assume that a simple portal or Java class is enough, but getting your team skilled enough to release a major portal requires training in many areas, and even then, in many cases it is only through experience that some things are learned. Think about some of the problems that you have overcome in the .Net world and how many months or years it took for your team to become as good as they are today.
These comments are not meant to scare you off, and there are many very good reasons to start making the migration from .Net to a J2EE platform. You are probably asking these questions yourself about moving your applications. We have customers every day that successfully make this kind of migration effort, however they often take their time, and do things in small and manageable steps.
Struts and JSF Applications
IBM WebSphere Portal has a strong story in the use of Struts and/or JSF in building portal applications. However, all is not as it seems in this area. The use of these technologies stems from clients who want to take advantage of some of their strengths, such as using an XML file to configure navigation, and the use of widgets in building user interfaces. To use these frameworks, however, there are different adaptations than teams may have used in previous applications. For example, with Struts in portal, IBM has released the Struts Portlet Framework, which is obviously based on Struts but provides some specific IBM classes to accomplish the effort. For JSF there is, for example, the FacesGenericPortlet, which extends the GenericPortlet class to accomplish JSF integration.
The point is that you should be aware that migrating your existing application will probably not be a simple matter of repackaging it into a WAR file and deploying. Some actual code migration will be necessary, and parts of your portlet may not behave as you expect, depending upon the actual functionality of your portlet. In general, though, there are some good reasons to take advantage of these frameworks. For example, if you already have teams that are skilled in a particular framework, you can leverage those skills with some additional training to migrate more quickly and build new applications with greater ease.
Some Overall thoughts on Migration
OK, I hope I haven't painted overall, too gloomy a picture. Remember we are talking about migration. In many cases our customers are migrating existing applications from platforms that don't exist or are not supported anymore and they have to undertake this type of effort. Or perhaps they have a homegrown system and want it migrated to a platform that will allow them to grow and enhance the system easier and faster then before.
What about Integration?
As I mentioned, integration is a little different, although it may overlap with some of the thoughts we had here. In many cases it may be easier to integrate applications in the easiest way possible until they can be replaced by native portal applications or removed altogether. If you have a migration story that has gone well or not so well, I would love for you to comment on it. Be honest and let us know what issues you faced, what problems you had, and how you overcame them to reach a successful conclusion. If you have questions about specific migrations, feel free to ask them here also and I'll do my best to comment. I'm really looking for general information at a platform level. If you have a specific issue or problem, I probably won't have time to research and discuss a problem of this nature; rather, I may only be able to point you to some resources or ask you to open a support ticket.
I hope this helps begin to answer the migration question. Look for more discussion over the coming weeks. Until then remember, the trick is to have fun!
A couple of requests have come in for more information about caching portal pages with a caching proxy such as the WebSphere Edge Server Caching Proxy. A recent article published on developerWorks, High Performance WebSites with WebSphere Portal, discusses in great detail the different forms of caching that are available and some of the steps you need to take to make page caching happen.
I have been playing with this for a while, getting ready for a presentation on using some of these new advances with portal. It's actually kinda cool and provides new capability when used within your portal design. Understand that only complete pages are cacheable at this level.
One issue that was raised was a potential conflict with users who bookmark different pages and then send those links to users in different geographies. If the recipient uses a different host name to access the portal through their own local edge server, then the bookmarks might route them to the wrong place. One idea I had was to stick an HTTP server in front of those edge servers and use rewrite rules to redirect folks back through their local edge instance. I love using rewrite rules, even though I'm not very good at writing them. I use them sometimes for controlling access, though I usually just hack away using some examples until it works. I'm always getting caught in redirect loops where the server just hangs.
Testing out this scenario, I was able to add the following section to my httpd.conf.
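The exact section isn't reproduced here, but a hypothetical sketch of that kind of mod_rewrite rule set might look like the following. The host names, subnet, and context path are all invented for illustration; the real conditions would match your own edge server host and client address range.

```apache
# Hypothetical example -- host names, subnet, and paths are invented.
# Users on the 9.27.x.x subnet who hit this HTTP server directly
# (i.e. with a Host header other than the local edge proxy) are
# redirected back through the edge server for subsequent requests.
RewriteEngine On
RewriteCond %{REMOTE_ADDR} ^9\.27\.
RewriteCond %{HTTP_HOST}   !^edge\.example\.com [NC]
RewriteRule ^/wps/(.*)$ http://edge.example.com/wps/$1 [R=302,L]
```

Note the `L` flag and the negated host condition: without them, the redirected request can match the rule again and produce exactly the kind of redirect loop mentioned above.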
If you take a look at the rewrite conditions, I'm basically looking for any users coming in with a specific IP address who are not using the edge server to access the http server. If that is the case, I do a simple redirect back to the edge for all subsequent calls. It appears to work ok, but I did run into some trouble not finding some bookmarked documents. I'll keep playing with it until I get it working well enough to recommend as a potential solution.
Anyway, I thought this might be of interest to folks at least from an idea standpoint. If you have experience or just want to make fun of my regular expressions : ) feel free to comment.
FYI: Another article on portlet and portal caching showed up on devWorks this week. Feel free to check it out!
Have fun!
A few months ago I wrote an entry about development frameworks and things to consider when making a framework decision called
Development Requirements and Business Value.
This discussion has been an ongoing struggle for me as I continue to learn and discuss with others their experiences with different approaches and different frameworks. A lot of my previous experience comes from leading development teams through projects on consulting engagements, so I am aware of the struggles that team members and development leads go through in making these decisions. I don't believe that a project manager or "hands off" architect can make an arbitrary decision to use a framework without understanding the massive impact it may have on team members. Many things must be taken into account: skill level, training, project requirements, and the timeline for project delivery.
Before going too far, I'll mention a few of my colleagues: Skyler Thomas, Brad Bouldin, Tim Hanis, Svetlana Petrova, and my manager Ken Polleck, all of whom have spent time recently discussing this topic with me ad nauseam and helping me define some of the thoughts below.
Most folks agree that in some cases the portal API can be a bit limiting, especially with the move toward JSR 168. Many developers who come from a Struts background want to continue with some of the features such as validation and page navigation that Struts made so easy to handle. JSF continues with that good stuff and adds more in terms of reusable interface components that can be used to quickly build up an interface and bind data to back-end methods. As more development teams become familiar with JSF they become enamored with the idea of making developers more productive by allowing them to use the tooling, specifically in Rational Application Developer to quickly build up portlets that can do all sorts of cool things.
The obvious thing to say is that JSF is great and everyone should move to this framework; however, I'm always a bit cautious about new things that are supposed to revolutionize the way we do things. I am devoting, and will continue to devote, some of my time to ensuring that JSF is used in a correct manner and not just because it is the great new thing.
Don't get me wrong. It's pretty cool and I expect that more and more folks will grow into these frameworks as they evolve more with WebSphere Portal. With that, I've come up with two rules around using JSF that have to be kept in mind as you move forward with your project. Actually as Tim Hanis pointed out to me, these rules are not specific to portal or even JSF in many ways. They can apply to any framework.
Rule 1: JSF allows you to quickly build portlets within a framework using its components. BUT using JSF implies restrictions, chiefly that you use the components as they are designed.
If you need new functionality, then you have the option of changing the component to include the additional functionality, or letting a developer figure out a workaround. Both of these require more expertise, skill, and time; sometimes more than if you had not used JSF in the first place. (IMHO) This also means that your advanced developers need more training and end up working on these enhancements.
Rule 2: The Tooling for JSF is not intelligent enough to always build a well designed application. Using this tooling to build a portlet may result in problems within your production environment if the design is not evaluated by more experienced folks.
Corollary to Rule 2: Tooling can't help a poor design. For example, you might store data that a component uses (like a list box) in the session instead of taking another design approach. Less experienced developers may not even know there are other design approaches.
My favorite example of this is when folks build a portlet that uses the web services client. Inexperienced folks will not worry about the fact that this portlet will access the service for every user. In some cases this may be OK, but in many cases it could overtax your portal and may lead to performance issues.
JSF is still pretty new, so the jury is still out in the portal world as to how effective and worthwhile it has been. One of my favorite questions to development teams is, "How did using Struts (now JSF) improve your effort?" and "Do you think you could have been more productive without it, using the plain portlet API?" The answers are often interesting.
I (and I'm sure others) would love to hear and learn from your comments about which framework you are using, why you chose it, and what the outcome has been.
JoeyBernal
Living in Houston, I was able to get my family out of town for a few days in anticipation of Hurricane Rita. As the storm moved closer toward the coast we could see that it was probably going to miss Galveston and Houston directly, but our other concern was what might happen after the storm. The loss of electricity, and possibly water, for days or weeks would have been devastating to a city as large as Houston. We all saw what happens when several million people start filling up their gas tanks and leaving town, so even if you are clever enough to purchase a generator, there is no guarantee that enough fuel will be available to run it for an extended period. That, combined with the recent heat wave in the area, could result in very uncomfortable if not dangerous surroundings.
OK, enough about me. This event got me thinking about the hosting facilities and internally hosted applications and sites within the greater Houston area, or perhaps the entire Gulf Coast. One might argue that the Gulf Coast is not a good place to locate your hosting facility, but every place has its issues: earthquakes, hurricanes, tornadoes, etc. Nowhere is absolutely safe. There are really two points of concern.
1. Do you have a plan in place for when the power fails for an extended period? This includes the people who manage the facility: in many cases they have their own families to consider, and keeping your site running may not be their first priority.
2. Do you have a plan for rebuilding your portal if the machines or building were destroyed? Is your site under version control, with configuration data ready to be rebuilt with minimal effort and testing?
The optimum goal would be to have a redundant site in a location somewhere across the country. Many major companies do, but does yours? If not, then you have to rely on a single hosting facility. One might think that if the local office is closed it's not that important; however, even if the local office is closed, you may have employees or customers worldwide who depend upon the application, affecting revenue and processes until the site is back up or workarounds are designed. Imagine a case where a nationwide organization hosts its intranet portal in a flooded area. Most organizations use their portal as a vehicle to inform employees of ongoing issues or information. They may also supply HR, health and life insurance, contact numbers for assistance, and other important information through this portal. If it's not available for an extended period, then a major communication channel is disrupted. Some things to consider are:
1. Ensuring that your current release is backed up and can be redeployed with minimal effort
2. Backing up your current portal configuration data, properties files, WebSphere settings, etc.
3. Backing up your portal database, with personal configuration data, ACLs, and layout data.
Unfortunately, none of this is easy. It really requires a commitment and hard work to ensure that this is done correctly and in a reusable manner. A good place to start looking for information is the redbook WebSphere Portal V5 Production Deployment and Operations Guide.
My point here is really just to raise awareness. While we don't want to detract from the suffering of people during these times of crisis, getting your organization back on its feet can only help to provide a sense of normalcy, and perhaps assist people in the ongoing recovery.