Achieving high availability with IBM Lotus iNotes

Learn how to achieve high availability for IBM® Lotus® iNotes® through the use of a software load balancer or hardware such as an Application Delivery Controller (ADC) in conjunction with IBM Lotus Domino® clustering. This article discusses some of the challenges related to properly load balancing Lotus iNotes to ensure high availability with the use of an ADC.

Vinod Seraphin, Senior Technical Staff Member and Architect, IBM Corporation

Vinod Seraphin is a Senior Technical Staff Member and lead architect for Lotus iNotes, which was "born" from Vinod's prototyping efforts to develop a compelling personal information manager (PIM) within a browser. He has been with IBM since 1991. Prior to working with Lotus Domino Web Access, Vinod was the Software Architect for Lotus Organizer®.



Jack Ciejek, Advisory Software Engineer, IBM

Jack Ciejek is an Advisory Software Engineer and works as a developer on the Lotus iNotes product. He has been with IBM since 1988 and, prior to working on Lotus products (including Lotus SmartSuite for OS/2), has worked on various IBM mainframe operating systems.



Rahul Garg, Staff Software Engineer, IBM

Rahul Garg is a Staff Software Engineer and has been a software engineer on the iNotes team focusing on complex configurations and customer deployments. He has been with IBM since 2005.



Nirmala Venkatraman, Performance Architect, IBM

Nirmala Venkatraman is a Performance Architect on the Lotus Domino server performance team. You can reach her at nvenkatr@us.ibm.com.



Craig Scarborough, Business Development Solutions Engineer, F5 Networks

Craig Scarborough is a Business Development Solutions Engineer for F5 Networks. He has more than 20 years of experience in networking, storage, and security. Craig is responsible for advancing the IBM/F5 partnership by identifying innovative solutions that leverage technology to meet client and market demands.



Ron Carovano, Senior Business Development Manager, F5 Networks

Ron Carovano is the Senior Business Development Manager with F5 Networks. Responsible for the global alliance with IBM, Ron oversees solution-development efforts conducted between F5 and IBM. He is also responsible for aligning F5’s business programs with IBM’s go-to-market initiatives.



06 May 2010 (First published 04 May 2010)



Introduction

Mail is a critical corporate application, and users and administrators expect 24x7 mail availability. With IBM Lotus Notes® and Lotus Domino, high availability is achieved with Lotus Domino clustering and mail file replicas. The Lotus Notes client contains explicit code to cope with scenarios in which a Lotus Domino server with which it is communicating is no longer available.

After a slight delay (as it awaits an attempted connection or communication to complete), the Lotus Notes client notifies the user that the server is no longer available and asks the user to confirm that it is to switch to another known replica server.

With Lotus iNotes, however, high availability requires the assistance of some additional infrastructure, such as an Application Delivery Controller (ADC). ADCs offer many benefits in the data center, including these:

  • Improving application reliability (through application monitoring and load balancing)
  • Enhancing security (application firewalls) and acceleration (TCP and HTTP optimizations)
  • Off-loading servers from some of the more compute-intensive tasks; for example, Secure Sockets Layer (SSL) encryption

Lotus Domino includes technology called Internet Cluster Manager, which is able to redirect the initial URL for an application to one of several back-end mail servers. If the actual mail server becomes unavailable after a session is started, however, there is no mechanism for recovering and switching to another available server that also contains a replica of the mail file.

When configured properly, a load balancer can provide users with a seamless failover experience. Users never realize that the original server with which they were interacting is no longer available; rather, the load balancer seamlessly detects this fact and sends the request on to another appropriate server.
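The behavior just described can be modeled in a few lines. The following is a simplified sketch, not the ADC's actual logic; the host names and the health map are invented for illustration. A real load balancer learns server health from active monitors rather than a static dictionary.

```python
def dispatch(request, pool, health):
    """Forward the request to the first healthy server in the pool.

    health maps server name -> True/False; in a real ADC this state
    comes from periodic health probes, not a static dict.
    """
    for server in pool:
        if health.get(server):
            return (server, request)  # forwarded without user involvement
    raise RuntimeError("no healthy servers in pool")

# Hypothetical two-server replica pool; mail1 has just gone down.
pool = ["mail1.example.com", "mail2.example.com"]
health = {"mail1.example.com": False, "mail2.example.com": True}
server, _ = dispatch("GET /mail/juser.nsf", pool, health)
# The user never sees that mail1 is down; the request lands on mail2.
```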

There are software-based load balancers and specialized hardware devices (such as ADCs) that provide this capability. The hardware varieties are typically more expensive but provide superior performance, greater capability, and greater savings of back-end server resources.

This article discusses some of the challenges related to properly load balancing Lotus iNotes and provides specific examples of the industry-leading F5 Networks BIG-IP Local Traffic Manager (LTM) Advanced ADC.

We also explore creating a general-purpose BIG-IP LTM configuration that might support most Lotus iNotes mail deployments, and then we consider what Lotus Domino server- and user-performance improvements might be realized by using a BIG-IP LTM.


Lotus Domino configurations for high availability

Lotus Domino mail files typically have a subdirectory (for example, mail) and a file name (for example, juser.nsf). For load balancing, the path for any mail file should be identical within each of the servers on which a replica resides, and the full path to the mail file must be unique (unambiguous) when it arrives at the load balancer.
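The first of these rules, identical replica paths across cluster members, is easy to verify mechanically. The sketch below checks it against an assumed inventory (the server and file names are invented for illustration):

```python
def validate_paths(replica_paths):
    """replica_paths: {mail file -> {server -> path of that replica}}.

    Returns the mail files whose replicas do NOT share an identical
    path across servers, which would break path-based load balancing.
    """
    bad = []
    for nsf, per_server in replica_paths.items():
        if len(set(per_server.values())) > 1:
            bad.append(nsf)
    return bad

# Hypothetical inventory: asmith.nsf violates the rule because its
# replicas live under different subdirectories on the two servers.
replicas = {
    "juser.nsf":  {"serverA": "mail/juser.nsf",  "serverB": "mail/juser.nsf"},
    "asmith.nsf": {"serverA": "mail/asmith.nsf", "serverB": "mail2/asmith.nsf"},
}
bad_files = validate_paths(replicas)
```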

Let's discuss a few configuration scenarios that can be used for high availability.

Configuration 1: Single mirrored cluster

This first configuration scenario is the least complex. It is a fully mirrored, clustered set of servers, and all mail files are located within this one cluster. Two or three servers house an identical directory structure and set of mail files, and cluster replication is enabled across these servers (see figure 1).

Figure 1. Two mirrored servers in Cluster A

In this scenario, at the load balancer, any Lotus iNotes-related request can be handed off to any server in the cluster.

Configuration 2: Multiple mirrored clusters

The next level of sophistication is when there are multiple Lotus Domino mail clusters similar to those in Configuration 1, and the load balancer needs to dispatch a received request to the proper set of back-end servers (those in the same cluster).

There are several possible approaches to accomplishing this:

  • In the first, each cluster uses a uniquely named subdirectory in which the mail files are stored, meaning that the normal URLs contain a segment that clearly identifies the cluster (see figure 2). The rules at the load balancer are then updated to key off these names and associate the proper back-end servers.

    Figure 2. Two servers in two different clusters with unique subdirectory names per cluster
    Such hard-coded lists of unique cluster subdirectory names and back-end Lotus Domino servers must be kept updated with any changes made to the server clusters.
  • In the second approach, the Lotus iNotes redirector application is configured to return the host-name portion of the home mail server's Domain Name System (DNS) name as an additional segment in the URL generated for the mail file (just before the path). The load balancer then uses this additional segment to look up the Lotus Domino cluster and the associated back-end servers.

    One issue with this approach is that Lotus iNotes might generate subsequent URLs in which the segment name is no longer there. Hence, the cluster identifier must be stored within a cookie, so that the load balancer looks at this value if the segment is not present.
  • In the third scenario, a load-balancer assistance agent is added to the Lotus Domino server that the load balancer queries to dynamically generate the list of proper Lotus Domino servers containing a replica of this mail file (see figure 3).

    Figure 3. Two servers in different clusters
    In this scenario, it is not necessary to hard-code specific cluster names or back-end Lotus Domino server IP addresses within the load balancer rules.
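The first two approaches above can be sketched together: route on a cluster-identifying URL segment when one is present, and fall back to a stored cookie when a subsequent Lotus iNotes URL omits the segment. The cluster names, pools, and cookie name below are assumptions for illustration, not product behavior:

```python
# Hypothetical cluster -> back-end server map held by the load balancer.
CLUSTER_POOLS = {
    "clusterA": ["a1.example.com", "a2.example.com"],
    "clusterB": ["b1.example.com", "b2.example.com"],
}

def route(path, cookies):
    """Return the server pool for a request path, keying off a
    cluster-identifying path segment, else a previously set cookie."""
    segments = [s for s in path.split("/") if s]
    for seg in segments:
        if seg in CLUSTER_POOLS:
            cookies["cluster"] = seg      # remember for later requests
            return CLUSTER_POOLS[seg]
    # Subsequent iNotes URLs may omit the segment; use the stored cookie.
    cluster = cookies.get("cluster")
    if cluster in CLUSTER_POOLS:
        return CLUSTER_POOLS[cluster]
    raise LookupError("cannot determine cluster for " + path)

cookies = {}
route("/clusterA/mail/juser.nsf", cookies)      # keys off the segment
route("/mail/juser.nsf?OpenDatabase", cookies)  # falls back to the cookie
```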

Configuration 3: Multiple non-mirrored clusters

The next level of complexity is the scenario in which there are multiple servers in a Lotus Domino cluster, but each mail file isn’t located on every server in that cluster. Instead, user mail files are sparsely distributed across a subset of servers within the cluster.

This scenario is probably the most complex configuration to properly support with respect to a load balancer, but it's a commonly deployed Lotus Domino configuration. Here are some possible approaches to accomplish this configuration:

  • Each unique combination of servers on which a particular mail file resides is reflected in some way within the unique subdirectory name where the mail file resides (see figure 4). This means that the normal URLs, in effect, identify both the cluster and the subset of servers within the cluster on which the mail file resides.

    The load balancer uses this information to dispatch requests to the proper server subset.

    Figure 4. Three non-mirrored servers in Cluster A
  • Alternatively, a load-balancer assistance agent is added to the Lotus Domino server, which the load balancer might query to dynamically generate the list of proper Lotus Domino servers containing a replica of this mail file (see figure 5).

    Figure 5. Three servers in Cluster A not mirrored with redirector

Thus, as shown by our discussion of the preceding options, the most versatile solution involves augmenting the Lotus Domino server to communicate the information about which back-end servers can be used to create dynamic pools for load balancing the current received request, without using special path names to communicate the relevant failover information.

Now let's explore in greater detail the creation of such a general-purpose service for implementing this solution, which also requires that the load balancer is able to make such dynamic queries and use the results.

The F5 Networks BIG-IP LTM offers a sophisticated scripting language based on Tool Command Language (TCL) that can be used to create what’s called an iRule. An iRule uses an easy-to-learn scripting syntax that enables the BIG-IP LTM to customize how it intercepts, inspects, transforms, and directs inbound or outbound application traffic.

Using this methodology, the load-balancing environment has the intelligence to route HTTP traffic to the correct Lotus Domino server.


Creating a load-balancer assistance service

Lotus Domino stores information about unique NSF files within a particular cluster in the cluster database directory (cldbdir.nsf); however, this database is unique only within a particular cluster. The cluster directory database on a different cluster within the same domain contains quite different information.

One way to determine whether the unique NSF file is in a particular cluster is to examine the cluster directory on the server at which the request is initially received, to determine whether the path is found. If so, then all is well. The proper set of servers can be returned, and the user’s home server might be placed in front of the returned list, to give it preference.

If, however, the path is not in the cluster directory on the server, the load balancer must query a server in a different cluster. To help it send a request to the proper cluster, information is looked up within the Lotus Domino directory names.nsf, to locate the set of servers that are in the cluster where the mail file of the current user resides.

To do this query, the service looks up the home mail server for the current authenticated user and then looks up the ClusterName of which this server is a part. That cluster can then be looked up in the ($Clusters) view to obtain the servers that are part of that cluster.
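The lookup sequence just described can be sketched as follows. The dictionaries below are invented stand-ins for the cluster database directory (cldbdir.nsf) and the Lotus Domino directory (names.nsf), not real Domino APIs; all server and user names are hypothetical:

```python
# Stand-in for cldbdir.nsf on the receiving server (this cluster only).
CLDBDIR = {"mail/juser.nsf": ["serverA1", "serverA2"]}
# Stand-ins for names.nsf lookups: home mail server, ClusterName field,
# and the ($Clusters) view.
HOME_SERVER = {"juser": "serverA2", "bsmith": "serverB1"}
CLUSTER_OF = {"serverA1": "ClusterA", "serverA2": "ClusterA",
              "serverB1": "ClusterB", "serverB2": "ClusterB"}
CLUSTERS = {"ClusterA": ["serverA1", "serverA2"],
            "ClusterB": ["serverB1", "serverB2"]}

def prefer_home(home, servers):
    # Move the home server to the front of the list when present.
    return ([home] + [s for s in servers if s != home]) if home in servers else servers

def lookup_servers(user, nsf_path):
    home = HOME_SERVER[user]
    servers = CLDBDIR.get(nsf_path)
    if servers is not None:
        # Path found in this cluster: return the replica servers.
        return ("X-Domino-ReplicaServers", prefer_home(home, servers))
    # Not in this cluster: find the home server's cluster via names.nsf.
    cluster = CLUSTER_OF[home]
    return ("X-Domino-ClusterServers", prefer_home(home, CLUSTERS[cluster]))
```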

We created the assistance service, using Notes Formula Language, and placed the key code within a form called ServersLookup in the Lotus iNotes redirector template. When requested by the load balancer, the ServersLookup form returns one of two HTTP response headers in the format X-Domino-xxxxx, each containing a comma-separated list of servers.

X-Domino-ReplicaServers is returned when the service finds the relevant path within its own cluster, whereas X-Domino-ClusterServers is returned only when the mail servers are part of a different cluster.

The ServersLookup form is provided with this article (see Appendix A). The performance explorations that follow did not use this assistance agent, exploring only Configuration 1.


Analyzing performance

The DWA85 workload was run against two clustered Lotus Domino mail servers, with replication between the servers. We ran with a total of 4000 concurrent users with 2000 active users on each server, with the Microsoft® Windows® 64-bit operating system and the 32-bit version of Lotus Domino 8.5.1.

All the tests were set up with 4000 users defined in each of the Lotus Domino directories on the servers. At the beginning of the test, each user had a mail file that was roughly 256 MB of uncompressed documents, with 3000 messages in the Inbox folder.

These tests have Lotus Domino transaction logging enabled with the Favor Runtime setting, and Mail Journaling is set to journal all messages locally. Domino Domain Monitoring (DDM) probes are enabled for messaging and Server Operating System, and all users have mail rules that block mail from 10 users external to the test.

With Lotus Domino 8.5.1, we enabled document compression on the mail databases, which reduced their size from about 250 MB to approximately 170 MB. In addition, we enabled the Domino Attachment and Object Store (DAOS) database property on some of the tests, after the mail databases were created, and we also enabled DAOS on the mail boxes and mail journal database.

In the first test, we ran the 4000 NotesBench DWA85 simulated users against the two Lotus Domino mail servers on the SSL port, without the BIG-IP LTM proxy between the clients and the servers. Each server had 2000 active DWA users in this test.

In the second test, we ran the 4000 NotesBench DWA85 simulated users, going through the BIG-IP LTM proxy, and accessed their mail servers. For this test, we disabled the response caching and GZIP compression on the Lotus Domino mail servers, using these Notes.ini settings:

iNotes_wa_GZIP_Disable=1
HTTPDisableUrlCache=1

The SSL port was also disabled for the HTTP server in the Server document of the Lotus Domino directory. The BIG-IP LTM proxy server handled the caching, GZIP compression, and SSL encryption during these tests.

Tables 1-3 summarize all the hardware specifications.

Table 1. Hardware configuration for Server 1
Model: Intel 64-bit platform
Processors for test / speed: Intel® Xeon MP configured as 2 quad-core processors / 3.67 GHz
Memory: 8 GB
Active physical drives: 42
Active logical volumes: 3
Operating system: Microsoft Windows 2003 X64
Lotus Domino version: Lotus Domino 8.5.1, 32-bit application
Notes.ini settings used when BIG-IP LTM offloads SSL/gzip: iNotes_wa_GZIP_Disable=1, HTTPDisableUrlCache=1
Table 2. Hardware configuration for Server 2
Model: Intel 64-bit platform
Processors for test / speed: Intel Xeon MP configured as 2 quad-core processors / 3.06 GHz
Memory: 12 GB
Active physical drives: 42
Active logical volumes: 3
Operating system: Microsoft Windows 2003 X64
Lotus Domino version: Lotus Domino 8.5.1, 32-bit application
Notes.ini settings used when BIG-IP LTM offloads SSL/gzip: iNotes_wa_GZIP_Disable=1, HTTPDisableUrlCache=1
Table 3. Hardware configuration for Server 3
Model: 6900
Processors: 2x dual-core
Memory: 8 GB
Active flash drive: 8 GB
Active hard drives: 2 x 320 GB
Operating system: 10.1

Figure 6 shows the baseline configuration.

Figure 6. Baseline configuration

Performance test configuration

We used two IBM 3850s with two 3.6 GHz Xeon processors with 8 GB of physical memory, each with a DS4300 with 42 fiber disks and a Microsoft Windows 2003 Server Enterprise 64-bit Edition operating system (see figure 7). The NotesBench load driver system was a Linux® server capable of handling up to 4000 DWA85 simulated users.

Figure 7. Performance test configuration with BIG-IP LTM 6900

Performance settings

Here are the settings for the BIG-IP LTM:

  • SNAT: Set to Automap
  • Nagle: Disabled
  • SSL: Enabled
  • Send buffer: 262144
  • Recv window: 262144
  • HTTP Profile:
    • Defaults from http-wan-optimized-compression-caching
    • Compress: Enabled
    • Compress GZIP level: 5
    • RAM cache size: 200 MB

The Lotus Notes settings are:

  • SSL: Disabled
  • HTTP Caching: Disabled
  • GZIP: Disabled

Performance results

Table 4 shows the key values measured for the two configurations.

Table 4. Key results
Test | Baseline | BIG-IP LTM | Percent improvement
Transactions/minute | 6028 | 6044 | -0.27%
Response time in seconds | 0.338 | 0.083 | 75.44%
Processor busy | 29.4 | 20.9 | 28.91%
Disk I/O Ops/second | 525 | 530 | -0.95%
Disk Kbytes/second | 4406 | 4308 | 2.22%

The values that showed any significant difference were the individual request response times and the processor-busy level on the Lotus Domino servers. With the BIG-IP ADC in place, the average response times were improved by 75 percent (see figure 8).

In other words, the response times going through the device were four times faster than when going directly to the Lotus Domino server. One reason for this improvement is that the ADC keeps persistent connections open to the Lotus Domino server and can more efficiently use these channels for subsequent requests.

Also, the processor-busy level on the Lotus Domino servers was 28 percent less than when an ADC was not used; lower processor-busy levels allow the Lotus Domino server to sustain a greater number of concurrent users.
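These percentages follow directly from the Table 4 values; a quick check:

```python
# Response time: a 75.44% reduction (0.338 s -> 0.083 s) is the same
# as being roughly four times faster.
baseline_rt, ltm_rt = 0.338, 0.083
rt_improvement = (baseline_rt - ltm_rt) / baseline_rt * 100   # ~75.44
speedup = baseline_rt / ltm_rt                                # ~4.07

# Processor busy: 29.4% -> 20.9% is a ~28.91% relative reduction.
baseline_cpu, ltm_cpu = 29.4, 20.9
cpu_improvement = (baseline_cpu - ltm_cpu) / baseline_cpu * 100
```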

Figure 8. Percent performance improvement

NOTE: We also attempted to measure client response times while simulating a low-speed connection for some key interactions with the mail client. When bandwidth was the key limiting factor, we did not measure any significant improvement in the key user operations that we timed.


Conclusion

Achieving high availability for Lotus iNotes requires the use of a software load balancer or a hardware ADC in conjunction with Lotus Domino clustering, the topologies for which include mirrored configurations and sparse clusters.

It’s relatively simple to support fully mirrored clusters but more complex to properly use sparse clusters. We also introduced some new server-side logic to help with the sparse cluster topology.

When one of the servers in a cluster goes down, such sparse clusters do a better job of spreading the load evenly across the remaining servers, and they eliminate the need to keep a replica of every user's mail file on every server in the cluster.

The performance runs also revealed that there are significant processor savings on the Lotus Domino servers when an ADC is deployed. Moreover, the tests confirmed significant improvements in response time for various requests, which can improve user response times when they are connecting using reasonable bandwidth.

For organizations looking to adopt more Web-centric solutions, an investment in software load balancers or hardware ADCs can significantly improve user response times and reduce the processor load on the front-end Lotus Domino mail servers.


Appendix A: ServersLookup form

The ServersLookup form has one "Computed for display" field called $$HTMLHead with a type of text. This field contains the formulas shown here. Debug statements have been added so that, if the form is opened manually from a browser, you can see the results from the formulas.

To open the form manually, issue the following request:

http://mail.acme.ibm.com/iwaredir.nsf/ServersLookup?OpenForm&nsfpath=mail\jsmith.nsf
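For illustration, a client or load balancer could construct such a request as follows. This is a sketch using the example host and path from the URL above; the sample URL shows the backslash unencoded (which browsers often accept), but encoding it is safer, and the form's @URLDecode recovers the original path either way:

```python
from urllib.parse import quote

def servers_lookup_url(host, nsf_path):
    # URL-encode the Domino path so the backslash survives the query string.
    return ("http://" + host + "/iwaredir.nsf/ServersLookup?OpenForm&nsfpath="
            + quote(nsf_path, safe=""))

url = servers_lookup_url("mail.acme.ibm.com", "mail\\jsmith.nsf")
# -> http://mail.acme.ibm.com/iwaredir.nsf/ServersLookup?OpenForm&nsfpath=mail%5Cjsmith.nsf
```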

tmpDebug := "";
tmpNSFPath := @ReplaceSubstring(@URLDecode("Domino"; @UrlQueryString("nsfpath")); "/"; "\\");
tmpServers := @DbLookup("":""; "":"cldbdir.nsf"; "($Pathname)"; tmpNSFPath; "CanonicalServername");
tmpServers := @If(@IsError(tmpServers); ""; tmpServers);
REM {Look up the home mail server};
tmpHomeServer := @Name([Canonicalize]; @NameLookup([NoUpdate]; @UserName; "MailServer"));
REM {If the home mail server is in the list of servers, move it to the front of the list};
tmpServers := @If(@IsMember(tmpHomeServer; tmpServers);
  tmpHomeServer : @Transform(tmpServers; "x"; @If(x = tmpHomeServer; @Nothing; x));
  tmpServers);
tmpDebug := tmpDebug + "ReplicaServers:" + @Implode(tmpServers; ",");
tmpDNSNames := "";
tmpClusterName := "";
tmpClusterServers := "";
REM {If no servers were found, the db is in a different cluster; return the list of cluster servers, with the home server at the front of the list};
tmpServers := @If(tmpServers = "" | @Elements(tmpServers) = 0;
  @Do(
    tmpDebug := tmpDebug + "Looking for cluster servers;";
    tmpClusterName := @Subset(@DbLookup("":""; "":"names.nsf"; "($ServersLookup)"; tmpHomeServer; "ClusterName"); 1);
    tmpClusterServers := @DbLookup("":""; "":"names.nsf"; "($Clusters)"; tmpClusterName; "$0");
    tmpClusterServers := @Transform(tmpClusterServers; "x"; @Name([Canonicalize]; x));
    tmpClusterServers := @If(@IsMember(tmpHomeServer; tmpClusterServers);
      tmpHomeServer : @Transform(tmpClusterServers; "x"; @If(x = tmpHomeServer; @Nothing; x));
      tmpClusterServers);
    tmpClusterServers);
  tmpServers);
tmpLimit := @Elements(tmpServers) + 1;
@For(n := 1; n < tmpLimit; n := n + 1;
  tmpHTTPHostNameALT := @Subset(@DbLookup("":""; "":"names.nsf"; "($ServersLookup)"; tmpServers[n]; "HTTP_Hostname"); 1);
  tmpServerFQDN := @Subset(@DbLookup("":""; "":"names.nsf"; "($ServersLookup)"; tmpServers[n]; "SMTPFullHostDomain"); 1);
  tmpDNSNames := @If(@Length(tmpDNSNames) > 0; tmpDNSNames + ","; "") +
    @LowerCase(@If(tmpHTTPHostNameALT != ""; tmpHTTPHostNameALT; tmpServerFQDN))
);
@If(tmpClusterName = ""; @SetHTTPHeader("X-Domino-ReplicaServers"; tmpDNSNames);
  @SetHTTPHeader("X-Domino-ClusterServers"; tmpDNSNames));
@SetHTTPHeader("Cache-control"; "no-store");
@If(tmpDebug = ""; ""; "<script>" + tmpDebug + "</script>")

Appendix B: Sample iRule

This iRule exercises the ServersLookup form to help locate a user's mail file across the domain. DominoServers is the server pool used in our sample on the BIG-IP device.

    ######
when CLIENT_ACCEPTED {
  #Set the status - 'needs server' 1 or 0.
  log local0. "got initial connect - needs a lookup."
  set needs_server 0
}

when HTTP_REQUEST {
  #Capture the original request - destined for a real server - and rewrite
  #the URI to query the ServersLookup form instead.
  if { ([HTTP::uri] ends_with ".nsf") and not ([HTTP::uri] contains "names.nsf") } {
    set original_request [HTTP::request]
    set needs_server 1
    set nsf "[substr [HTTP::uri] 1 ".nsf"].nsf"
    HTTP::uri "/iwaredir.nsf/ServersLookup?OpenForm&nsfpath=$nsf"
  } else {
    set needs_server 0
  }

  #Either way, send the request to our destination pool. When a lookup is
  #needed, the selected pool member acts as the "mapping" server to query
  #against; it returns the X-Domino-* header and its values.
  pool DominoServers
}

when HTTP_RESPONSE {
  if { $needs_server == 1 } {
    set server_list [split [HTTP::header "X-Domino-ClusterServers"] ","]
    HTTP::collect [HTTP::header "Content-Length"]
  }
}

when HTTP_RESPONSE_DATA {
  foreach {svr} $server_list {
    if { "" ne $svr } {
      #The data group maps a server name to address:port, for example:
      #TEST.ONE.TWO.COM 10.100.100.80:8080
      set dest [findclass [string trim $svr] ::NSREPLICASERVERS " "]
      log local0. "Servername is [string trim $svr]"
      log local0. "$dest"
      set node_addr [getfield $dest ":" 1]
      set node_port [getfield $dest ":" 2]
      log local0. "server is: $node_addr on $node_port...issuing HTTP::retry"
      if { [LB::status pool DominoServers member $node_addr $node_port] eq "up" } {
        log local0. "Selecting $node_addr:$node_port"
        pool DominoServers member $node_addr $node_port
        HTTP::retry $original_request
        break
      }
    }
  }
  set needs_server 0
}
######

Acknowledgments

The authors wish to thank the following individuals who assisted with the high-availability explorations outlined in this article: John Fortier (IBM), James Powers (IBM), Mark Oslowski (IBM), Matt Cauthorn (F5 Networks), and John Alam (F5 Networks).
