During investigation of some intermittent problems in one of our web applications, we observed JSESSIONID cookie behavior that we couldn't explain, so I dug into the cookie's mechanics. This article summarizes what I learned.
JSESSIONID Cookie Format
In a clustered environment, the JSESSIONID cookie is composed of the core Session ID and a few other components. Here's an example:
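(I won't reproduce one of our real cookies, so the IDs here are hypothetical.) The layout is a 4-character Cache ID, then the core Session ID, then a colon-separated Clone or Partition ID:

JSESSIONID=0001bzS_AvGt2vM0JaWlfY6t9fa:138888kcd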
It's still unclear to me what the different values mean here, but searching our Proxy logs indicates that for the vast majority of our Sessions, the Cache ID is 0001. For instance, a search of one day's log around 17:00 yielded the following counts of cookies containing each Cache ID:
For a particular Session ID, the Cache ID can definitely change mid-session without any other changes, in particular without the Clone/Partition ID changing. This does not indicate a switch to another Cluster member, but I don't know exactly what it does indicate. Based on the relative loads of our various systems, it seems possible that Cache ID changes occur only under heavy load.
A Partition ID is appended to the cookie if memory-to-memory replication in peer-to-peer mode is utilized for Distributed Session management. Otherwise, a Clone ID is appended.
This Clone ID will match one of the CloneID attributes in the Server elements within the web server's plugin-cfg.xml file. For instance, 138888kcd in:
<Server CloneID="138888kcd" ConnectTimeout="10" ExtendedHandshake="false" ...>
If you use memory-to-memory Session replication, your cookies will contain Partition IDs rather than Clone IDs.
Partition IDs are similar in function to Clone IDs, but as best I can tell there is no direct way to determine which values correspond to which cluster members. They are internally mapped by the WebSphere HAManager to specific Clone IDs, and that mapping is exchanged with the web server plug-in so that it can maintain Session affinity and find additional cluster members for failover. (The exchange takes place in private headers on each response from WAS back to the plug-in, and those headers are removed before the response is sent back to the client.)
Session Affinity and Failover
The Clone/Partition ID corresponds to whichever cluster member creates the Session, and the plug-in is responsible for sending that Session to the same cluster member as long as it is available. From the Scalability Redbook:
Since the Servlet 2.3 specification, as implemented by WebSphere Application Server V5.0 and higher, only a single cluster member may control/access a given Session at a time. After a Session has been created, all following requests need to go to the same application server that created the Session. This Session affinity is provided by the plug-in.
If on a subsequent request the specified cluster member is unavailable, the plug-in will choose another cluster member and attempt to connect to that. If Distributed Sessions are configured, via database persistence or memory-to-memory replication, the Session will be resumed in-progress on that new member. If not, a new Session will be created and the user's progress will be lost.
If a new cluster member is able to resume the existing Session, it will append its own Clone/Partition ID to the existing JSESSIONID cookie. For instance:
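(Again with hypothetical IDs.) The new member's Clone ID is appended after the original one:

JSESSIONID=0001bzS_AvGt2vM0JaWlfY6t9fa:138888kcd:1388kdm38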
Now the plug-in knows that two different cluster members can potentially service this Session. If the original member becomes available again, the Session will switch back to it.
Finally, note that according to the System Management Redbook:
WebSphere provides session affinity on a best-effort basis. There are narrow windows where session affinity fails. These windows are:
Referenced articles and Redbooks
Web Application Configuration
This is the basic annotation-based JAX-RS coding. See the "Getting Started" article below, Part 1 of the "RESTful Web services" article, and the Developing JAX-RS Web applications section of the WebSphere Feature Pack documentation.
It's a straightforward, portable way to define the URI Paths of your Resources, the HTTP Methods they support, the Parameters passed to them, the Content Types they produce, and more.
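For illustration, a minimal resource sketch along those lines (the class, path, and method names are made up; the annotations are the standard javax.ws.rs ones):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Hypothetical resource, for illustration only
@Path("/accounts")
public class AccountResource {

    // Handles GET /accounts/{id}, producing plain text
    @GET
    @Path("{id}")
    @Produces("text/plain")
    public String getAccount(@PathParam("id") String id) {
        return "Account " + id;
    }
}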
I've been trying to connect to a customer's Web Service (from within WebSphere 6.1, but that's a different topic) and having some problems getting the security to work (another different topic), so I've desperately wanted an https-capable traffic monitor/proxy.
After trying the one built into RAD (which successfully passes the https requests, but displays them as encrypted garbage), Apache tcpmon (no SSL), membrane-monitor (seemed to not respond at all and I couldn't tell why), and Wireshark (looked too tedious to get working with SSL), I eventually got soapUI's HTTP Monitor working, although not without some non-obvious setup there either. So I'm going to document that setup here.
Trust the Service Endpoint
First, you'll need a jks TrustStore where you can place the public key of the server which hosts the https-protected Service you're going to call. You can use a tool like the GUI ikeyman, included with WebSphere, or the command-line keytool, included with your JRE, to do this.
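For example, importing the server's public certificate with keytool might look like this (the alias and file names are illustrative):

keytool -importcert -alias service-host -file service-cert.cer -keystore truststore.jks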
Launch the HTTP Monitor
Next, it seemed fairly obvious what to do in soapUI. Right-click on a project and "Launch HTTP Monitor". You're presented with the following dialog:
To do https/SSL, you need to select HTTP Tunnel rather than HTTP Proxy. Set the Port to the local port your client is going to hit, and the endpoint to the remote service URL you're ultimately calling.
Then go to the Security tab:
Initially, all I had configured here was the aforementioned TrustStore, in the HTTP tunnel's TrustStore and TrustStore Password fields. This allows the HTTP Monitor to talk SSL to the remote Service.
However, when I started the monitor with those settings, I received a cryptic dialog: "Error starting monitor: C:\Program File..."
I suspected Windows authority problems or path problems, but nothing seemed to resolve those, so I tried running the soapui.bat script from a command-prompt rather than the soapUI desktop shortcut, and then I saw the following prompts:
Ahah! Google does find something relevant there.
HTTP Monitor Server Certificate
HTTP Monitor needs its own SSL certificate (private key) and KeyStore in order to talk SSL to your client application as well, which I should have realized. Then, of course, your client environment has to import the public portion of that certificate into its TrustStore. But the less obvious detail is that this private key must use the alias "jetty" (as soapUI uses the Jetty server under the covers).
So, again using keytool (or ikeyman), this time generate a private/public key certificate pair with the alias "jetty", creating the KeyStore that is referenced in the HTTP tunnel's KeyStore, Password, and KeyPassword fields. (Note the two properties listed in the prompts above.) Here's the keytool command I actually used:
jre\bin\keytool -genkeypair -alias jetty -keystore soapui.jks
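To produce the public portion for your client environment's TrustStore, an export along these lines should work (the output file name is illustrative):

jre\bin\keytool -exportcert -alias jetty -keystore soapui.jks -file jetty.cer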
Now complete the remaining Security fields and click "OK":
You'll have an SSL Tunnel listening on your local port, and you'll be able to see SOAP requests and responses sent through that local URL.
This post documents procedures that can simplify, and increase the security of, remote logins to UNIX systems through the use of SSH private and public keys.
PuTTY for Windows
Creating an SSH private key
PuTTYgen is used for this step. Use the "Generate" button and follow the instructions.
You can choose RSA or DSA key types, and you can change the key size.
Enter a passphrase and "Save private key" somewhere on your local system. Note: Use a passphrase that you can remember but that is stronger than a normal password. Security people usually suggest using whole sentences or combinations of words/phrases. Later we'll configure another program so that you don't have to type this passphrase very often.
Installing the public key on the remote system
There are various ways to do this, but the PuTTYgen window explains what I've found to be the easiest: copy the text from the Key text area at the top of the window and manually add it to the $HOME/.ssh/authorized_keys file on the remote system.
The authorized_keys file probably doesn't yet exist, and the .ssh directory may or may not. (The same directory is where ssh places the known_hosts file that contains the public keys for hosts which you have trusted for ssh connections in the past.)
If either doesn't exist and has to be created, ensure that the permissions are as follows: the .ssh directory should be accessible only by you (mode 700), and authorized_keys readable and writable only by you (mode 600).
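That is:

chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys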
Each line in authorized_keys can also be configured with further options, including restricting a key's use to specific hosts, for instance. The best documentation I've found on those options is this FreeBSD man page.
You can, and probably will want to, install the exact same public key on each system on which you want to use key-based authentication.
Using the key pair to log in with PuTTY
This is the "manual" approach, which isn't necessary if you follow the next step, but I wanted to document it for completeness. Here we explicitly tell a PuTTY session that we'll be authenticating with the private key file we saved earlier.
If you use this approach, when you connect to the remote system, you'll be prompted to enter your private key's passphrase:
You could argue that this is actually worse usability than the user ID/password solution, since you'll be typing a much longer passphrase; that's where the next step comes in handy.
Automatic key usage with Pageant
Another of the PuTTY programs, Pageant, can act as an agent providing access to private keys and only requiring you to authenticate once for each key.
Once you've added this key file to Pageant and entered your passphrase there, you can leave Pageant running, and all the PuTTY programs will be able to authenticate with your key without your further involvement. In fact, with Pageant running, any attempts to connect to a host which trusts your key will automatically connect even if you haven't explicitly configured your PuTTY session to use key authentication. (See the default, enabled "Attempt authentication using Pageant" checkbox in the above PuTTY screenshot.)
Furthermore, the Pageant tray icon can actually be used to directly launch any saved PuTTY sessions you've created. Right-click on the tray icon and select "Saved Sessions".
Finally, you can use the command-line to pass to Pageant any private keys you want it to automatically load when it starts. This allows you to create a shortcut icon that will prompt you for the necessary passphrases and then start a copy of Pageant ready to be used for subsequent, key-authenticated SSH sessions:
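For example, a shortcut target along these lines (the paths are illustrative):

"C:\Program Files\PuTTY\pageant.exe" c:\keys\personal.ppk c:\keys\work.ppk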
I investigated this for a project: Linux (at least RHEL) mailx can use a remote SMTP server, which enables us to test whether that server allows email sending from our application.
mailx -v -s "test message" -S smtp=<smtp-server-address> <recipient-address> < body.txt
Where:
"-S smtp=<smtp-server-address>" tells mailx to submit the message through the specified remote SMTP server
-v is "verbose", which displays the SMTP conversation
body.txt is a local file that contains the body of the test message I'm sending
The test was initially failing like this:
Resolving host <server-address> . . . done.
And success (eventually) looked like this:
Resolving host <smtp-server-address> . . . done.
Following up on Part 1, here's an additional tip which I use frequently: when I need to tunnel SSH through one machine to reach others, using a background proxy with SSH key authentication for the initial connection simplifies the 2-hop process.
Automatic proxying with Plink
The PuTTY installation also includes a command-line SSH program called Plink, which can be used in a "background" mode. The PuTTY help describes how to use Plink as a local proxy program, creating a background tunnel for the main PuTTY window. This configuration is performed on the Connection > Proxy tab:
Specify a Proxy type of local, and the standard SSH Port, 22.
In the Proxy hostname field, you enter a host to which you have direct access and on which you've configured key authentication, then you refer to that host via the %proxyhost variable in the plink command you provide as the local proxy:
\path\to\plink -l %user %proxyhost -nc %host:%port
The %host and %port variables represent the ultimate destination Host Name and Port fields from the main PuTTY Session tab (which, as usual, you can enter as needed or save under separate Sessions for each server).
%user and %proxyhost are from this configuration page.
Note: If your Default Settings PuTTY profile has a username (on the Connection > Data tab) or hostname configured in it, plink will use those automatically. Discovering that wasted a couple of hours for a colleague and me. On the other hand, if the configured default username matches your username on the proxy server, you can omit the -l parameter from the plink command entirely.
Tunneling additional Ports
Furthermore, this technique can be used in conjunction with "normal" SSH tunneling (Connection > SSH > Tunnels) in order to tunnel a localhost port through both hops. For instance, to tunnel the default WebSphere Application Server administration console port:
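Assuming the default administrative console port of 9060 (check your server's actual WC_adminhost port), the Tunnels settings would be Source port 9060 and Destination localhost:9060; clicking Add produces a forwarded-port entry of:

L9060   localhost:9060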
Then you connect your browser to http://localhost:9060/ibm/console as if the console were local.
You can similarly 2-hop tunnel X-Windows by enabling X11 Forwarding on PuTTY's Connection > SSH > X11 tab. (You'll need a Windows X server, such as Xming.)
Any additional tips you've found useful? Better ways to accomplish these same tasks? I welcome your comments and suggestions.
I sometimes need to set a custom JVM Property for one or more Application Servers and dislike the tedium of setting them one-at-a-time through the console.
Here's a Jython script for wsadmin to add a JVM Custom Property to a specified Application Server, or to "all" application servers. It will also replace an existing property with a new value and description. Usage:
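At its core, such a script looks roughly like this (a minimal sketch of the approach, not the original script; it assumes a single JVM definition per server and omits the "all"-servers loop and error handling):

# Sketch: add or replace a JVM custom property on one server
def setJvmProperty(nodeName, serverName, name, value, description):
    server = AdminConfig.getid('/Node:%s/Server:%s/' % (nodeName, serverName))
    jvm = AdminConfig.list('JavaVirtualMachine', server).splitlines()[0]
    # remove any existing property with the same name
    for prop in AdminConfig.list('Property', jvm).splitlines():
        if AdminConfig.showAttribute(prop, 'name') == name:
            AdminConfig.remove(prop)
    AdminConfig.create('Property', jvm,
        [['name', name], ['value', value], ['description', description]],
        'systemProperties')
    AdminConfig.save()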
Update: the above only obtains the heap size if a custom value has been explicitly configured. From this other post, here's a mechanism which will obtain the maximum heap size otherwise (note there doesn't appear to be an attribute to obtain the minimum heap size):
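Roughly, that mechanism queries the runtime JVM MBean rather than the configuration (a sketch; the process name is illustrative, and maxMemory is reported in bytes):

jvm = AdminControl.completeObjectName('type=JVM,process=server1,*')
print AdminControl.getAttribute(jvm, 'maxMemory')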
I'm sure this information is in hundreds or thousands of other places, but as I had written it up for an internal team Wiki, I thought I'd repost here as well.
Perl is included by default in AIX, but most of the Modules are not, so the rest of this article deals with installing those.
Some Perl scripts make use of additional Modules, and there are Modules with many different capabilities, so be sure to look for existing Modules if you need to do anything that someone else might have had to do already.
Obtaining a Module
For instance, a script we created uses the Date::Simple module. After finding its page, you click the Download link on the right side to obtain what is actually a source package in .tar.gz format.
Module Installation from Source
These instructions are summarized from what I believe was an earlier version of the generic installation instructions on the CPAN site. That site now describes a simpler approach using the cpanminus installation module; I'll have to try that out.
Decompress the file
This can be done as any user.
gunzip -c Date-Simple-<version>.tar.gz | tar -xvf -
Build the module
These steps can also be performed as any user.
By default, the Makefile will attempt to create an "XS" version of the Module, meaning a version that uses native C code to improve performance. As a result, the "make" command will require access to a local C compiler.
If you don't have a C compiler, or you want to create a "pure-Perl" Module that you can copy around to various systems, use the "noxs" option. The output of "perl Makefile.PL" will actually tell you to use this option if errors appear during the "make" step. If that occurs, the error will look something like:
cc_r -c -D_ALL_SOURCE -D_ANSI_C_SOURCE -D_POSIX_SOURCE -qmaxmem=-1 -qnoansialias
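In that case, the pure-Perl build sequence looks like:

perl Makefile.PL noxs
make
make test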
Install the module
This step needs to be performed as a user that can write to the Perl installation directory, most likely "root".
Also watch for file permissions: by default, the module directory, subdirectories, and files could end up readable only by root.
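One way to avoid that is a permissive umask before installing (the value shown is typical, not from the original instructions):

umask 022
make install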
Each user on the system has his or her own keystore where the user's public and private keys are stored. This keystore can be protected by a separate password, or it can be synchronized so that it's protected by the normal login password. I believe this is a system-wide setting, and the default is to synchronize the passwords. User keystores are located under /var/efs/users/<username>/.
If a user is explicitly granted access to act as a specific group in relation to EFS (in addition to being assigned to that group at the OS level), the group's private and public keys will also be copied into the user's keystore.
Groups also have their own keystores, located under /var/efs/groups/<groupname>/.
Each file is encrypted with a unique, symmetric (AES) key. For each user or group that is authorized to view the encrypted file, the symmetric key is then itself encrypted with the user/group's public key from its keystore, and that user-specific encrypted version of the key is stored in the file's extended attributes (EAs). That is, there will be one EA for each user and group that has access to the file in question. (See section 2.4 of the AIX V6 Advanced Security Redbook.)
Requirements & Constraints
Tips & Specific Commands
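For flavor, the basic commands look like this (a sketch based on the AIX documentation; the file and user names are illustrative):

efskeymgr -v                # show the keys loaded into your current keystore
efsmgr -e secret.txt        # encrypt a file
efsmgr -l secret.txt        # list the users/groups able to decrypt it
efsmgr -a secret.txt -u bob # grant user bob access to the file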
Note: based on this article which I've only recently seen and haven't yet fully parsed, those last two items may no longer be true starting with AIX 6.1 TL4.
HTML comments are not displayed by a user's browser, but they're available in the source view. JSP comments are removed before the source HTML is returned to the browser.
In many cases where we use comments in our JSPs, we prefer they not appear in the browser-viewable page source. Best case, they're confusing and useless to anybody who looks at them. Worst case, they actually reveal something about our implementation which we'd rather not reveal.
So without further ado:
<!-- HTML Comment -->
<%-- JSP Comment --%>
Since I can tell that I don't fully understand this myself, I'll start by pointing to these articles which more fully & authoritatively describe SPF's purpose, behavior, and effects:
The first concept to understand is that SMTP email contains an "envelope" with an "envelope sender address" (sometimes called the "return-path" and technically the SMTP MAIL FROM command) that is separate from the "From" header in the message itself (a.k.a. the "header sender address").
The envelope sender address is what actually receives bounces, BTW. It's sent, and potentially checked, before the email message itself (including the header) is even delivered. The header sender address is what a user's email client typically displays, and it obviously doesn't always match the domain of the SMTP server which actually originated the message.
Sender Policy Framework
SPF, then, is an attempt to prevent forging of the envelope sender, not the header sender.
SPF is a combination of "sender" and "receiver" actions. The domain owner publishes a special kind of DNS record indicating which servers are authorized to send emails from its users. The receivers then verify that the IP address from which the email originated is in that authorized list for the domain.
So the purpose of SPF isn't to allow anotherdomain.com servers to send email on behalf of somedomain.com, the original requirement I was investigating; that can be done without SPF. Its purpose is the opposite: to tell receivers (who check) not to accept email which claims to come from somedomain.com (that is, with an envelope sender address of somedomain.com) unless the actual originating server's IP address is one listed as authorized to do so.
The primary goal seems to be reducing the amount of SPAM claiming to originate at somedomain.com.
Sender Addresses in JavaMail
Note that JavaMail by default uses the header sender ("From" address) also as the envelope sender address, but this doesn't have to be the case.
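For instance, a sketch with illustrative addresses: the mail.smtp.from session property sets the envelope sender (the SMTP MAIL FROM) independently of the From header:

import java.util.Properties;
import javax.mail.Session;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class EnvelopeSenderExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.somedomain.com");    // illustrative host
        props.put("mail.smtp.from", "bounces@somedomain.com"); // envelope sender
        Session session = Session.getInstance(props);

        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("user@somedomain.com")); // header "From"
        // ... add recipients, subject, and content, then Transport.send(msg)
    }
}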
Potential Problems with SPF
The main potential problem I see is that email forwarding services typically don't work with SPF. That is, if a server publishes a restrictive SPF record, an individual receiving emails from that sender provides an email address that is actually a forwarding address, and the eventual destination of that forwarding address checks SPF records, the email will be rejected.
Since the sending server typically cannot control what kind of email addresses its customers provide, this seems like an unacceptable situation to me. Undoubtedly some end-users will provide email addresses which will be unable to receive email from a domain with strict SPF settings, and they'd never know they had done so. (Although the sending server should receive bounce emails indicating SPF failure in those cases.)
For those curious, but not curious enough to work through the linked articles:
SPF is implemented by a domain adding a special kind of DNS record to its DNS server. (Apparently this can be either a TXT record or an SPF record, with the same content/format either way.) I won't go into the details documented elsewhere, but a simple example would be:
somedomain.com. IN TXT "v=spf1 a mx -all"
Meaning: for email with an envelope sender address in somedomain.com, accept it only if the sending IP address matches one of the hosts in the domain's DNS "A" record (the main DNS record for a domain) or one of its DNS "MX" (mail) servers. Consider "all" other IP addresses a "hard FAIL" (i.e., reject them if you care about such things).
I looked up ibm.com and saw that it actually has an SPF record that says no servers at all should be sending email for @ibm.com:
ibm.com. IN TXT "v=spf1 -all"
I had to ponder for a bit before I thought to check us.ibm.com. It has several valid senders listed, including some IP ranges:
us.ibm.com. IN TXT "v=spf1 ip4:22.214.171.124/24 ip4:126.96.36.199/24 a:d2... ~all"
Hard and Soft Fail
Note the difference in the "all" clauses in the previous two examples. "-all" (hyphen) means to treat emails from all (other) senders as a "hard FAIL". "~all" (tilde) means to treat them as a "SOFTFAIL". Note that Google.com's SPF record and Microsoft.com's SPF record also use the SOFTFAIL default.
SOFTFAIL is theoretically intended as a transitional setting while you're testing out the implications of your SPF record/policy, with the recommendation that recipients accept such email but "mark" it in some way. But based on the use of the SOFTFAIL indicator by those large, savvy domains, I'd say that hard FAILs probably reject too many real-world uses today. (I suspect the forwarding scenario described above is one of the main such problematic uses.)
After attending a webcast on the WebSphere Application Server 6.1 Plug-in, I thought I'd summarize the content I found helpful.
The plugin is a native module (DLL or .so) that is installed into the Web Server and is offered each HTTP request to determine whether WAS should handle it. It makes this determination based on the contents of the plugin-cfg.xml file which is normally auto-generated by WAS.
I say, "normally", because it can be manually edited, but doesn't need to be for most operating scenarios. Note that many of these customizations can now be configured within the Administrative Console as well, which significantly reduces the need for manual file editing. This configuration is located under Servers -> Web Servers -> <Server Instance> -> Plug-in Properties. (Visiting this location in the Console is also a useful way to to locate the active plugin configuration file.)
The configuration file specifies virtual hosts (IP names/addresses and ports) and URL paths which are serviced by WAS applications, enabling the plugin to determine which requests it can route to WAS and which it must leave for the HTTP server to handle.
The file also contains values governing performance, maintenance, and failover, such as logging configuration and network timeouts.
Finally, it contains the "transport" information used to forward matched requests to the appropriate Web Containers in WAS. That is, the IP name and port combinations of the different Cluster members' Web Container "Transport Chains".
"Affinity" is the notion of sending a user to the same Cluster member (application server JVM) for each subsequent request after the initial one, if the application uses HTTP Sessions. This behavior is required by the JEE specification, and in WebSphere is the responsibility of the Web Server plugin. WebSphere maintains this affinity by appending a "Clone ID" or "Partition ID" to the "Session ID", which the plugin then compares against its configuration to correctly route subsequent requests.
For further detail on the Session ID mechanics, see this post.
New users, those without an existing Session, are distributed to Cluster members in either a weighted round-robin or a random fashion. Round-robin is the default: the plugin decrements a cluster member's "weight" each time it sends that member a new request, until all members' weights are below zero, at which point all weights are reset to their starting values.
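As an illustrative sketch of that weighting behavior (my own illustration in Java, not the plugin's actual code):

// Sketch of weighted round-robin as described above
class RoundRobin {
    private final int[] starting; // configured starting weights
    private final int[] weights;  // current weights
    private int next = 0;         // rotating index

    RoundRobin(int[] startingWeights) {
        starting = startingWeights.clone();
        weights = startingWeights.clone();
    }

    int pickMember() {
        for (int tried = 0; tried < weights.length; tried++) {
            int i = next;
            next = (next + 1) % weights.length;
            if (weights[i] >= 0) { // member still has weight remaining
                weights[i]--;
                return i;
            }
        }
        // every member is below zero: reset and pick again
        System.arraycopy(starting, 0, weights, 0, weights.length);
        return pickMember();
    }
}

Members with higher starting weights receive proportionally more new requests between resets.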
Tips or Items of Note
Sometimes you want to test quick changes to single files deployed on WebSphere without having to upload a whole new EAR, WAR, or even individual file. (I wish the individual file upload process was simpler and safer.)
If you have command-line access to the server, you can edit or upload many of those files in place under their deploy location, typically something like:
<profile_root>/installedApps/<cell_name>/<app_name>.ear/<war_name>.war/
However, for web.xml, the copy that resides under this location is not the one actually used. Instead, WebSphere reads the copy under:
<profile_root>/config/cells/<cell_name>/applications/<app_name>.ear/deployments/<app_name>/<war_name>.war/WEB-INF/web.xml
Several times we've needed to retain output files which are a mix of static HTML and dynamic data.
In some of those cases, we've wanted to capture the same HTML (or a subset of it) which was returned to a user's browser as a normal HTTP response. While that's made me wish for a programmatic way to call the JSP engine, we ended up using a custom Filter to just capture the HTML on the way out, and that's worked well enough.
However, we now have a desire to capture merged HTML into standalone files that are not related to HTTP request or response processing. So rather than hardcode the HTML creation, I decided to start using one of the existing template engines. Velocity, FreeMarker, and StringTemplate are three of the most widely used. One of my main requirements was that the template files be true HTML that would syntax-check successfully in an IDE or code editor.
Without going into how I narrowed the choice (my criteria weren't very concrete, but there are several comparisons on Stack Overflow), I decided to further pursue both Velocity and StringTemplate, so I wanted to start with a generic interface, using their implementations to help me decide which of the two I'd use. Remember, we're not using a template engine to replace our JSP/JSTL "view" rendering, just to merge static and dynamic data into a handful of specific files we need to retain.
In the end, I had to choose, and for various reasons I chose Velocity. Here are both versions, although the StringTemplate version is a little less capable because I stopped working with it after I chose Velocity.
One of the reasons I chose Velocity was that I could use the existing VelocityTools extensions to, for instance, format Date values in different ways, so you can see my use of the DateTool object in this implementation.
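As a minimal sketch of that usage (the template name and context variables are made up):

import java.io.StringWriter;
import java.util.Date;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;
import org.apache.velocity.tools.generic.DateTool;

public class ReportRenderer {
    public static void main(String[] args) throws Exception {
        VelocityEngine engine = new VelocityEngine();
        engine.init(); // defaults to the file resource loader

        VelocityContext context = new VelocityContext();
        context.put("date", new DateTool()); // the VelocityTools date formatter
        context.put("created", new Date());

        StringWriter out = new StringWriter();
        engine.mergeTemplate("report.vm", "UTF-8", context, out);
        System.out.println(out);
    }
}

where report.vm might contain a reference like $date.format('yyyy-MM-dd', $created).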