Logging is a critical component in any application. Effectively managing Groovy logging in IBM® WebSphere® sMash can be a frustrating experience for the uninformed. This article explains how to properly establish a functional and flexible logging environment for your WebSphere sMash applications.


Karl Bishop (kfbishop@us.ibm.com), Senior Software Engineer, IBM

Karl Bishop is a Senior Software Engineer in IBM Software Services for WebSphere. He has been working with WebSphere sMash since it was first released. He is also a co-author of the book Getting Started with IBM WebSphere sMash. His technical focus is on Web 2.0, with an emphasis on Dojo-based applications.



10 August 2011

Also available in Chinese

Overview

One area of WebSphere sMash that generates frequent questions is how to set logging levels for Groovy classes. This article addresses the gray areas surrounding this topic and encourages you to extend your logging usage beyond simple logger.INFO{} calls.

Note

WebSphere sMash is also available and known as "IBM WebSphere Application Server Feature Pack for Dynamic Scripting". This article uses the previous and friendlier product name, but the same concepts apply to both environments.

My current project utilizes WebSphere sMash as a data services layer. It provides access points for over fifty different data sources of various types. Most of these sources come from third-party applications that we have no control over, and it is common for one or more of them to be down or to have data-related problems. Logging plays a big part in our ability to monitor these resources. By effectively using the various logging levels provided by Java™ logging, including fine-grained control of the logging filtering for specific classes, our support team can quickly identify the root cause of problems.


Groovy logging in WebSphere sMash

Before going further, let's review how logging is done in Groovy within a WebSphere sMash environment. The underlying logging infrastructure is provided by Java logging. There are seven severity levels available for each log entry, which are shown here in descending order of importance: SEVERE, WARNING, INFO, CONFIG, FINE, FINER, and FINEST. There are also two other meta-levels: ALL (synonymous with FINEST) and OFF (disables logging).
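Because the underlying infrastructure is java.util.logging, this ordering is visible directly in the Level constants. A quick plain-Java check (the class name and printout are my own illustration, not part of sMash):

```java
import java.util.logging.Level;

public class LevelOrder {
    // The seven named levels in descending severity, as java.util.logging defines them.
    static final Level[] DESCENDING = {
        Level.SEVERE, Level.WARNING, Level.INFO, Level.CONFIG,
        Level.FINE, Level.FINER, Level.FINEST
    };

    static boolean strictlyDescending() {
        for (int i = 1; i < DESCENDING.length; i++) {
            // each level must carry a strictly smaller numeric value than the one before it
            if (DESCENDING[i].intValue() >= DESCENDING[i - 1].intValue()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("strictly descending: " + strictlyDescending());
        // OFF sits above SEVERE, so nothing passes it; ALL sits below FINEST,
        // so every named level passes -- which is why ALL behaves like FINEST.
        System.out.println(Level.OFF.intValue() > Level.SEVERE.intValue());
        System.out.println(Level.ALL.intValue() < Level.FINEST.intValue());
    }
}
```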

Standard Groovy logging in WebSphere sMash is done by calling the method matching the desired logging level on the implicit logger object, passing in a closure that contains the information to be logged. An example of a logging call looks like this:

logger.INFO{ "This is my logging entry" }
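The closure matters: it is only evaluated if the entry passes the level filter, so expensive message construction is skipped for disabled levels. On newer JDKs, plain java.util.logging offers the same deferral through its Supplier overloads; a minimal sketch (class and logger names are mine, not sMash's):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DeferredLogging {
    static int evaluations;

    static String expensiveMessage() {
        evaluations++; // count how often the message is actually built
        return "This is my logging entry";
    }

    static int run() {
        evaluations = 0;
        Logger logger = Logger.getLogger("app.scripts.demo");
        logger.setUseParentHandlers(false); // keep console output quiet for the demo
        logger.setLevel(Level.WARNING);     // INFO entries are now filtered out

        logger.info(DeferredLogging::expensiveMessage);    // below threshold: never built
        logger.warning(DeferredLogging::expensiveMessage); // passes the filter: built once
        return evaluations;
    }

    public static void main(String[] args) {
        System.out.println("message built " + run() + " time(s)"); // built 1 time
    }
}
```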

Such a call produces an entry into the log handlers that includes a timestamp, the source package.class and method, and the logged message itself. There are three pre-defined log handlers in WebSphere sMash. Each handler also has a default minimum log level setting, as listed in Table 1.

Table 1. Predefined log handlers
Handler          Default log level   Outputs to
ConsoleHandler   INFO                Standard Out
LogHandler       INFO                /logs/error-0.0.log
TraceHandler     ALL                 /logs/trace-0.0.log

Given this default configuration, you can see that all logging of INFO and above goes to the standard log handler file, error-0.0.log, and that ALL log entries are written to the trace log file. The main problem we faced in our project was that we did not get enough information in the normal logs, while receiving an overwhelming amount of output in the trace log. Neither was providing what we needed to support the application. Note that the majority of our application's logging entries are in the FINEST to FINE range.
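This routing is simply each handler applying its own level check on top of the logger's. The effect can be reproduced with plain java.util.logging and two throwaway in-memory handlers (the class and logger names below are my own stand-ins, not sMash's handlers):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class HandlerSplit {
    // Minimal stand-in for a log handler: remembers what it actually accepted.
    static class CapturingHandler extends Handler {
        final List<String> lines = new ArrayList<>();
        CapturingHandler(Level level) { setLevel(level); }
        @Override public void publish(LogRecord r) {
            if (isLoggable(r)) lines.add(r.getLevel() + ": " + r.getMessage());
        }
        @Override public void flush() {}
        @Override public void close() {}
    }

    static int[] counts() {
        Logger logger = Logger.getLogger("app.demo");
        logger.setUseParentHandlers(false); // keep the default console handler quiet
        logger.setLevel(Level.ALL);         // the logger itself passes everything

        CapturingHandler errorLog = new CapturingHandler(Level.INFO); // like LogHandler
        CapturingHandler traceLog = new CapturingHandler(Level.ALL);  // like TraceHandler
        logger.addHandler(errorLog);
        logger.addHandler(traceLog);

        logger.severe("boom");
        logger.info("status");
        logger.fine("detail"); // below INFO: reaches the trace handler only

        logger.removeHandler(errorLog);
        logger.removeHandler(traceLog);
        return new int[] { errorLog.lines.size(), traceLog.lines.size() };
    }

    public static void main(String[] args) {
        int[] c = counts();
        System.out.println("errorLog=" + c[0] + " traceLog=" + c[1]);
        // prints: errorLog=2 traceLog=3
    }
}
```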

The existing documentation on the Project Zero web site implies that to alter the logging levels, you need to create entries in the /config/logging/levels map and restart the application. Frustration is frequent when this does not work and the new level settings are apparently ignored. The solution to this conundrum is obvious once you understand how all the logging pieces fit together.


Setting up a Groovy logging environment

WebSphere sMash projects used to automatically provide a logging.properties file in the /config directory. At some point, this file stopped being included in favor of an internal default implementation, and the confusion and frustration over logging kicked into high gear. This file is critical for customizing your logging environment, so the first thing you need to do is add it back to the project. In this file, you define the logging handlers to be used and their default logging levels. There are several more settings you can alter, such as the logging format and output file names, but those are not central to this discussion. The final item in this file is the top-most parent logger level. This is the default level applied to all classes unless a more specific filter rule is applied, which we will do in just a moment. Listing 1 shows the modified /config/logging.properties file.

Listing 1. Modified logging.properties file
//-- File: /config/logging.properties
.level = INFO

handlers = zero.core.logging.ConsoleHandler     zero.core.logging.LogHandler

zero.core.logging.ConsoleHandler.level           = ALL
zero.core.logging.ConsoleHandler.formatter       = zero.core.logging.LogFormatter
zero.core.logging.ConsoleHandler.exceptionFormat = NONE

zero.core.logging.LogHandler.level               = ALL
zero.core.logging.LogHandler.pattern             = logs/error-%u.%g.log
zero.core.logging.LogHandler.encoding            = UTF-8
zero.core.logging.LogHandler.limit               = 1048576
zero.core.logging.LogHandler.count               = 10
zero.core.logging.LogHandler.append              = true
zero.core.logging.LogHandler.formatter           = zero.core.logging.LogFormatter
zero.core.logging.LogHandler.exceptionFormat     = NONE

Most of these values are taken directly from the default WebSphere sMash logging.properties file, which is included in the Download section of this article. Notice that we have completely removed the reference to TraceHandler; we do not want this output in our application. The other setting of interest is that the base logging level for each handler is set to ALL. This provides the flexibility to assign any logging level to any class filter without external restrictions, because you cannot set a logging level filter lower than the base level of a handler. The parent filter (.level) is set to INFO, so only INFO and above logging will be output to the handlers by default. This is normally what you want. In theory, you do not need to touch this file again.
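The role of the root .level entry can be seen with plain java.util.logging, which reads the same properties format through its LogManager; a minimal sketch (the inlined property string and logger name are illustrative only, and the sMash handler classes are omitted):

```java
import java.io.ByteArrayInputStream;
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.Logger;

public class RootLevelDemo {
    // Feed a minimal configuration to the LogManager -- the same mechanism
    // that reads logging.properties at startup.
    static Logger configure() {
        try {
            String props = ".level = INFO\n";
            LogManager.getLogManager().readConfiguration(
                    new ByteArrayInputStream(props.getBytes("UTF-8")));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        // This logger has no explicit level, so it inherits the root (.level) setting.
        return Logger.getLogger("app.scripts.whatever");
    }

    public static void main(String[] args) {
        Logger anyClass = configure();
        System.out.println(anyClass.isLoggable(Level.INFO)); // true
        System.out.println(anyClass.isLoggable(Level.FINE)); // false
    }
}
```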

Note: The logging.properties file must be located in the top-level project. If you have stacked dependency projects, this file will not be picked up from a lower project, and the application will fall back to the standard internal WebSphere sMash logging.properties file.

At this point, you want to add in some customization filter levels in the application configuration (/config/zero.config). The block shown in Listing 2 effectively does nothing at this point, but gives you a baseline for making quick changes to the filtering rules.

Listing 2. Default logging configuration setting
//-- File: /config/zero.config
/config/logging/levels += {
    "public"                      : "INFO",
    "app"                         : "INFO",
    "app.scripts.myapp.someclass" : "FINER"
}

With this configuration, you are redundantly stating that all logging events under the public and app trees have a logging level of INFO. You are adding a fictitious exception to a special class under /app/scripts to log all FINER and higher logging events. From here, you can blanket change all logging by altering the top two entries.
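This filter map behaves like the standard java.util.logging logger hierarchy: a class with no rule of its own inherits the level of its nearest configured ancestor. A plain-Java sketch of the same three rules (logger names taken from Listing 2; the sibling class name is my own invention):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class FilterHierarchy {
    // Hold strong references so the configured loggers cannot be garbage collected.
    static final Logger APP     = Logger.getLogger("app");
    static final Logger SOME    = Logger.getLogger("app.scripts.myapp.someclass");
    static final Logger SIBLING = Logger.getLogger("app.scripts.myapp.otherclass");

    static {
        APP.setLevel(Level.INFO);    // blanket rule for the "app" tree
        SOME.setLevel(Level.FINER);  // specific exception for one class
    }

    public static void main(String[] args) {
        System.out.println(SOME.isLoggable(Level.FINER));    // true: its own rule applies
        System.out.println(SIBLING.isLoggable(Level.FINER)); // false: inherits "app" = INFO
        System.out.println(SIBLING.isLoggable(Level.INFO));  // true
    }
}
```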

For our project's development workstations and development test server, we set the public and app entries to ALL. For our integration, staging, and up through production servers, we leave these settings at INFO unless there is an issue. We either make a blanket filter change, or add in a custom rule for the specific package or class under investigation.

This works quite well, but when making changes to the zero.config file, you must restart the application, which also means manually logging into the server. This is disruptive and time consuming, so to make life just a little easier, you can create some custom logging services to help with logging diagnosis.


Advanced logging filter manipulation

To change logging levels on the fly, you need a support class fronted by a REST service. The logging support class contains methods to view the current filters, add or update a filter and its logging level, and test a given filter's logging level. Most of the methods in this class are trivial, so let's focus on the method that adds a new logger filter, shown in Listing 3.

Listing 3. Add a new logging filter
//-- File: /app/scripts/ibm/kayman/admin/logging.groovy
import java.util.logging.Level
import java.util.logging.Logger

//-----------------------------------------------------------------------
// Adds/updates a logging filter to the specified level.
// Valid levels are: "OFF", "SEVERE", "WARNING", "INFO",
//                   "CONFIG", "FINE", "FINER", "FINEST", "ALL"
//------------------------------------------------------------------------
def setFilter( filter, level ) {
    def _logger = Logger.getLogger( filter, "zero.core.logging.messages" )
    def ll = zget("/config/logging/levels")
    if ( ll == null ) {
        ll = [:]
    }
    if ( isValidLevel(level) ) {
        logger.INFO{ "Setting ${filter} == ${level}" }
        _logger.setLevel( Level.parse(level) )
        ll.put(filter, level)
        zput("/config/logging/levels", ll)
        return ll
    } else {
        throw new Exception("Invalid level requested for filter ${filter}: ${level}")
    }
}

Altering the logging filters is not just a matter of changing the config's levels map. While you still do this to maintain a simple reference of the current filters, those settings are only actually read at application launch. Instead, you must access a logger instance for the class in question. Once you have a logger instance for that class (or, better, for a filter string), you can update it to the desired level. Getting a logger automatically creates a new instance for that name if one does not already exist. Since you cannot actually delete a logger instance, you disable it by setting its level either to OFF or, better yet, to the level of its next matching parent logger. The rest of this method simply updates the /config/logging/levels value to keep a logical reference to your defined filters.
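In plain java.util.logging, setting a logger's level to null makes it inherit from its parent again, which achieves the "level of its next matching parent" reset without looking that level up yourself; a sketch under that assumption (logger names are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DisableFilter {
    static boolean[] demo() {
        Logger parent = Logger.getLogger("app");
        Logger child  = Logger.getLogger("app.scripts.myapp.someclass");
        parent.setLevel(Level.INFO);
        child.setLevel(Level.FINER);  // the filter we later want to "remove"

        boolean before = child.isLoggable(Level.FINER);

        // Loggers cannot be deleted, but a null level means "inherit from parent",
        // which effectively retires the filter.
        child.setLevel(null);
        return new boolean[] {
            before,                          // true: own FINER rule was active
            child.isLoggable(Level.FINER),   // false: back to parent's INFO
            child.isLoggable(Level.INFO)     // true
        };
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // true false true
    }
}
```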

The REST service that fronts this class has event handlers for GET (single and list), POST, and DELETE. They are all similar, so we will just show the POST entry in Listing 4. To deter any pranksters, we strongly suggest that you protect this resource.

Listing 4. POST entry
//-- File: /app/resources/kayman_logging.groovy
//---------------------------------------------------------------------------
//-- POST handler
//---------------------------------------------------------------------------
def onCreate() {
    def filter = zget("/request/params/filter")
    def level =  zget("/request/params/level") ?: "INFO"
    if ( filter ) {
        invokeMethod("ibm.kayman.admin.logging", "setFilter", filter, level)
    } else { 
        throw new Exception('Missing required filter parameter')
    }
}

When a POST call is made, as shown in Figure 1 using the Poster plugin, the new filter is added to the desired level.

Figure 1. POST call using Poster
POST call using Poster

Command line logging filter checks

As a final topic, we added a simple command line query command. This command line interface (CLI), located in /app/tasks/logging.groovy, enables you to view the existing logging filters, as well as perform a simple query on a class filter string.

Listing 5. Command line access into logging configuration
//-- File: /app/tasks/logging.groovy
//---------------------------------------------------------------------------
// Task to view and test the application's logging filters
//---------------------------------------------------------------------------
import zero.json.Json

//---------------------------------------------------------------------------
def onCliTask() {
    def args = zget("/event/args")
    if ( args ) {
        switch( args[0] ) {
        case "list":
            showFilters()
            break
        case "test":
            invokeMethod("ibm.kayman.admin.logging", "runTests", args[1])
            break
        default:
            syntax()
            break
        }
    } else {
        syntax()
    }
}

//---------------------------------------------------------------------------
def showFilters() {
    System.out.println( Json.encode(
        invokeMethod("ibm.kayman.admin.logging", "getFilters", null), true ) )
}

//---------------------------------------------------------------------------
def syntax() {
    System.out.println """
Syntax:
zero logging list  - Show current logging filters and their level
     test "filter" - Run a simple test on the provided filter string
                     (e.g. "app.scripts.ibm.mytest")
"""
}

//---------------------------------------------------------------------------
// EOF
//---------------------------------------------------------------------------

Running these commands from the command line produces output similar to that shown in Figure 2.

Figure 2. Results of running logging commands
Results of running logging commands

Assuming these CLI calls are run immediately after the Poster test above, where we added a new filter, you should see that filter in the list, right? The answer is no. CLI commands always run in their own process, so you only see the default entries defined in the /config/logging/levels setting. The final command takes an arbitrary class name, determines its calculated logging level, and tests it.
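The "calculated logging level" is simply the level of the nearest ancestor logger that has one set explicitly. A plain-Java sketch of that resolution (the helper and names are my own illustration, not the article's runTests implementation):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class EffectiveLevel {
    // Walk up the logger tree until a logger with an explicit level is found;
    // that is the level actually applied to the starting logger.
    static Level effectiveLevel(Logger logger) {
        for (Logger l = logger; l != null; l = l.getParent()) {
            if (l.getLevel() != null) return l.getLevel();
        }
        return null; // unreachable in practice: the root logger has a level
    }

    static Level query() {
        Logger app = Logger.getLogger("app");
        app.setLevel(Level.FINE);
        // No rule of its own, so it resolves through the tree to "app".
        return effectiveLevel(Logger.getLogger("app.scripts.ibm.mytest"));
    }

    public static void main(String[] args) {
        System.out.println(query()); // FINE, inherited from "app"
    }
}
```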


Conclusion

This article shed some light on the often murky understanding of how Groovy logging works within WebSphere sMash. WebSphere sMash, and its alternate incarnation as the Dynamic Scripting feature pack, has much potential and is widely used within IBM to solve a myriad of application needs. It solves little problems quickly and easily, and can scale nicely to meet the needs of your company, including small to medium-sized projects. Future WebSphere sMash articles will cover how to create a Dojo-based front-end application to manage logging levels, monitor WebSphere sMash applications and performance, and easily change config settings at deployment time to match specific server environments.


Download

Sample code: Groovy_Logging_Sample_Code.zip (9KB)
