With IBM WebSphere Application Server Versions 5.x and 6.x, you could either manage fully independent base application server nodes or use WebSphere Application Server Network Deployment to manage nodes in a tightly-coupled, synchronous manner. The base application server node is easy to set up and manage for a single server environment. The Network Deployment topology provides additional qualities of service such as high availability, scalability, and data replication, plus it provides central management capabilities for nodes federated with the deployment manager. As more and more environments were built using the base and Network Deployment topologies, the following issues began to surface:
- How do you manage base nodes remotely?
- How do you manage a large number of base nodes?
- How do you manage multiple Network Deployment cells?
- How do you manage base nodes or Network Deployment cells that are geographically diverse, and connected possibly through a high latency, low bandwidth network?
WebSphere Application Server V7 addresses these issues by providing the capability to manage deployment manager cells and standalone base nodes using two new topologies:
- Administrative agent topology.
- Job manager topology.
The administrative agent topology provides the capability to remotely manage multiple base nodes on the same computer system with a single management server. When combined with the job manager topology, a large number of base nodes can be remotely and centrally managed. The job manager topology also provides the capability to manage multiple deployment managers and their cells. The job manager does not replace direct management of the base nodes or Network Deployment cells; you can continue to manage your existing environment with APIs and commands at the same time that it is being managed through the job manager. The job manager also provides a new asynchronous job submission framework for administration. This combination enables the base nodes or Network Deployment cells to be located geographically distant from the job manager itself.
The new topologies both complement and extend the base and deployment manager topologies by creating a new flexible management capability. If you are happy with the existing capabilities of WebSphere Application Server base or Network Deployment, no change is needed as you move to V7. On the other hand, if you are running into the above issues, you can create the new administrative agent or job manager topologies based on your existing base or Network Deployment topologies.
After reviewing the base topology, this article moves on to the administrative agent topology, showing how it supports remote management of base nodes on the same system. The Network Deployment topology will then be reviewed, followed by the job manager topology and how it can enhance your management capabilities. The article concludes with security considerations for the agent and job manager topologies.
WebSphere Application Server and WebSphere Application Server - Express products (hereafter referred to collectively as "base application server") provide the basic environment for running and managing applications. The management domain or cell of a base application server contains a single node and a single application server. This single application server is responsible for hosting applications and all the administrative services necessary to manage those applications and their environment.
Base application server environments are useful when quality of service needs are modest and the number of servers needed to handle the application demand is manageable. For example, a base application server environment might be created to perform proof of concept analysis, serve a few applications in a small, single computer environment, or support a unit test environment in development. Multiple base application server environments can be created to handle more capacity, but as the number of environments in this topology increases, operational complexity increases, and management efficiency decreases.
As application serving demands increase, a few limitations with the base application server environment emerge. When multiple base application server node profiles are created on a single system, administrative services are duplicated with each profile, increasing physical memory requirements and slowing initialization processes. Because each profile can only manage a single application server, each server must be managed completely independently from the others. Because the administrative processes run on the same server that hosts applications, the server cannot be recycled remotely. Stopping the application server stops the administrative services as well, so the server must be restarted locally. Finally, a management infrastructure is needed to manage a large number of base application servers.
Figure 1. Base application server topology
A base application server topology is created by installing the product image and then using the Profile Management Tool (PMT) or the manageprofiles command to create an application server node profile. The base application server can be managed using the wsadmin tool and, if installed, the administrative console. More than one base application server profile can be created on a single operating system image; however, each profile is managed independently from the others.
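As a sketch, creating a standalone application server profile with the manageprofiles command might look like this on a Linux system (the install root and profile name are illustrative assumptions, not required values):

```shell
# Create a standalone application server profile from the "default" template.
# Adjust the install root and profile name for your environment.
/opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh \
  -create \
  -profileName AppSrv01 \
  -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/default
```

The Profile Management Tool performs the same profile creation through a graphical wizard, so either route produces an equivalent base topology.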
The administrative agent, a new management topology in V7, is designed to remove limitations of the base topology. The administrative agent topology separates administrative processes from application serving (Figure 2). This topology provides these benefits:
- Application servers are used only for running applications.
- Multiple application servers on a given node can be managed via a single administrative process.
- Multiple nodes on the same computing device can be registered with an administrative agent with management functions isolated within registered nodes.
- System resources are used more efficiently because administrative processes are not duplicated for each application server.
- Nodes registered with an administrative agent can become part of a job management topology, enabling the management of a large number of base application servers (see the job management topology discussion below).
The administrative agent topology is built from an administrative agent node and one or more application server nodes on a single system (Figure 2). Registering application server nodes with an administrative agent enables them to be managed from a single point of administrative control using the administrative console and wsadmin scripting tools. The administrative agent topology uses resources more efficiently because administrative functionality is delegated from the application server node to the administrative agent. Each additional application server added to a system no longer needs to duplicate the entire administrative infrastructure (Figure 3). Be aware that the administrative agent does not provide clustering, failover, or multiple node capabilities. These features remain available in WebSphere Application Server Network Deployment.
Figure 2. An administrative agent topology
As with base application servers and Network Deployment cells, you have the option of using either the administrative console or wsadmin scripts to manage the administrative agent and the nodes registered with it. When you log into the administrative agent console, you specify whether you want to manage the administrative agent or any nodes registered with it.
Figure 3. Administrative agent and base application server runtimes
To set up an administrative agent topology:
- Install WebSphere Application Server V7.
- Create administrative agent and application server profiles using the Profile Management Tool.
- Register the base application server profile(s) with the administrative agent.
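The steps above can be sketched as commands (paths and profile names are illustrative assumptions; check your install root before running anything like this):

```shell
# Create an administrative agent profile from the "management" template.
/opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh \
  -create \
  -profileName AdminAgent01 \
  -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/management \
  -serverType ADMIN_AGENT

# registerNode is run from the administrative agent profile's bin directory,
# pointing at the base application server profile to be registered.
/opt/IBM/WebSphere/AppServer/profiles/AdminAgent01/bin/registerNode.sh \
  -profilePath /opt/IBM/WebSphere/AppServer/profiles/AppSrv01
```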
If you are familiar with node federation in the deployment manager topology, you might have noticed similarities between administrative agent registration and deployment manager federation. Although these two processes are similar, it is important to note a few differences:
- When a node is federated with a deployment manager, the base node's configuration is first backed up, then replaced with a new configuration containing the node agent.
- When a node is registered with an administrative agent, the base node's configuration is not backed up, and only minor changes are made to its configuration. The console and management EJB (MEJB) applications are disabled and the base configuration is updated with information that points to the administrative agent. For each base node registered with the administrative agent, a new managed node configuration object is created. Each managed node has a name, unique within the administrative agent. After registration, the base node's wsadmin.properties file is updated with the administrative agent's port so that future wsadmin commands are directed to the administrative agent rather than to the base application server.
Furthermore, there are differences in how nodes are removed from a deployment manager cell and how they are unregistered from an administrative agent:
- When a node is removed from the deployment manager, the configuration that was backed up when it was federated is restored.
- When a node is deregistered from the administrative agent, it retains all configuration changes. This enables the administrator to continue to use the base node, although it can no longer be managed through the administrative agent.
After a base node is removed from a deployment manager federation, it can be registered with an administrative agent. Conversely, after it is deregistered from an administrative agent, it can be federated with a deployment manager.
If no node has been registered, the administrative agent console login page will appear, similar to that of the base or Network Deployment environment. If one or more nodes have been registered, the administrative console will first display a page for specifying the node to be administered. Upon selecting a node and clicking continue, a login page will display (Figure 4).
Figure 4. Selecting target node and logging in to the administrative agent console
The experience of logging into one of the registered nodes is very similar to that of logging into a base server console. The only difference is that you can now create, modify, start, or stop more than one application server per node.
For each node registered with the administrative agent, the agent creates an administrative subsystem for the node. Administrative subsystems are isolated from each other to prevent interference between the administration of different base nodes. To accomplish this, each administrative subsystem:
- is a self-contained copy of the administration runtime used to manage a base node.
- is accessed through a different set of JMX connector ports.
- uses the configuration files of the base node that it manages. Consequently, each registered base node might have different user registries configured if security is enabled.
Figure 5. Administrative subsystems
The entire administration runtime is available in a base application server environment. After registering with the administrative agent, most of the administration function is assumed by the agent. Besides the administrative console, the administrative agent also assumes the functions of application management, configuration service, and administrative commands, corresponding to AdminApp, AdminConfig, and AdminTask from wsadmin.
In order to preserve the administrative programming model, where access to administrative functions are available locally in the application server, proxies are created in the base application server after registration. These proxies support the same APIs as those for AppMgmt MBean, ConfigService MBean, and RemoteCommandMgr MBean. Calls to the proxies are re-routed to the administrative subsystem. The proxies also register to listen and re-emit notifications from the MBean implementations.
If you use wsadmin, you should configure it to use the administrative subsystem's connector port. Although wsadmin can operate with the base application server port, this configuration adds unnecessary round-trip delays, CPU utilization, and memory load, because JMX calls are routed from the application server to the administrative agent and then back to the application server.
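A hedged example of pointing wsadmin directly at the administrative subsystem (the host and port shown are assumptions; look up the subsystem's actual connector port in the agent profile's port assignments):

```shell
# Connect wsadmin to the administrative subsystem's SOAP connector port
# rather than to the base application server's own port.
/opt/IBM/WebSphere/AppServer/profiles/AdminAgent01/bin/wsadmin.sh \
  -lang jython -conntype SOAP -host localhost -port 8878
```

Updating the profile's wsadmin.properties with the same port achieves the routing permanently, which is what registration does for you automatically.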
Figure 6. Application server proxies
Figure 7. zOS cross-LPAR administrative agent topology
The administrative agent can manage multiple registered base application server nodes on the same LPAR where the administrative agent is running. Taking advantage of some z/OS® platform-specific capabilities, it can also manage registered base application server nodes on other LPARs within the same sysplex. The sysplex needs to be set up to use a shared HFS between the LPARs being managed, so that the administrative agent can access and manage the configuration files of all those base application server nodes. The administrative agent also takes advantage of the MVS Route command to start or stop registered base nodes across the LPARs of the same sysplex. The monitoring of registered base nodes is achieved through the cross-system coupling facility (XCF). The administrative agent and its registered nodes join an XCF group that notifies group members when other members join or leave the group.
WebSphere Application Server Network Deployment has been available since Version 5. Network Deployment makes it possible to manage multiple application server nodes (usually distributed across different computer systems) from a central point of control. The Network Deployment cell consists of the management server, called the deployment manager, and one or more application server nodes that are federated into the environment.
Figure 8. A Network Deployment topology
Because the Network Deployment topology is well known, only a cursory description is provided here.
To set up a Network Deployment topology:
- Install WebSphere Application Server Network Deployment V7 on a computer system.
- Create a deployment manager profile.
- Create distributed application servers. If you want application servers on additional systems, install WebSphere Application Server on each system, then use either the PMT or the manageprofiles utility to create one or more application server profiles on each system.
- Federate application server profiles with the deployment manager.
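The same steps can be sketched in commands (install paths, profile names, and the deployment manager host are illustrative assumptions; 8879 is the usual default deployment manager SOAP port):

```shell
# Create a deployment manager profile from the "management" template.
/opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh \
  -create \
  -profileName Dmgr01 \
  -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/management \
  -serverType DEPLOYMENT_MANAGER

# Federate an application server profile into the cell. addNode is run from
# the bin directory of the profile being federated.
/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/addNode.sh \
  dmgr_host.example.com 8879
```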
Once you have a Network Deployment cell set up, you can use the administrative console or wsadmin scripts to manage multiple distributed base application server nodes from a single point of control. Once a base node has been federated with a deployment manager, it should only be administered through the deployment manager. If you have multiple Network Deployment cells, the deployment manager administrative tools can only be used with a single cell at a time. For example, if you have two cells, you must log into each one separately in order to manage it using the administrative console.
The job management topology is new in WebSphere Application Server V7. This topology, based on the use of a new job manager profile, supports a new style of distributed management of WebSphere environments. It will be helpful to describe job management by comparing and contrasting it with the management of a Network Deployment cell.
In a Network Deployment environment, configuration information for the various nodes, servers, applications, and resources is centralized within the deployment manager. The deployment manager administrative console provides views of configuration information and status for the infrastructure resources of the cell. Configuration changes made through these views are updated in the deployment manager configuration repository and then distributed to federated nodes through the node synchronization process. Synchronization of configuration information across nodes is inherent in this style of management and can occur at the time the configuration change is made on the deployment manager or can be rolled out at a later time. In either case, it is controlled at the deployment manager. Actions invoked through these views are communicated by the deployment manager to the node agent of the corresponding application server node. Operations are generally performed synchronously and are only possible if the node agent is available. When the node agent is unavailable, administrators are unable to invoke any operations for that node. Hence, it can be thought of as a tightly-coupled management system.
In contrast, a job manager supports a loosely-coupled, asynchronous style of management. A job manager doesn't maintain a central repository of configuration information for distributed resources, nor does its console provide a view of configuration settings or current resource status. Instead, a job manager maintains a repository of management operations for distributed resources that have registered with it. Once registered with a job manager, each base node or deployment manager is a "managed node" of that job manager. The job manager is used to submit administrative operations, called "jobs," to targeted managed nodes. Jobs can be submitted to become available immediately, at a scheduled date and time, or on a repeating basis.
The deployment manager and the job manager also differ in center of control. In Network Deployment topologies, the deployment manager is the center of control; administrative actions initiated on the deployment manager are pushed out to federated application server nodes. In a job management topology, control resides with registered nodes. Jobs are queued on the job manager and registered nodes periodically check for available jobs in the queue. If a node finds an available job for it, it pulls it down and runs it.
A job manager can be used to administer multiple base application servers and deployment manager cells. In both cases, job management is an adjunct to other styles of management. For example, in a job management environment such as that shown in Figure 9, jobs can be submitted through the job manager administrative console or the job manager wsadmin tool to base nodes that are registered with an administrative agent and to deployment manager cells. However, the deployment manager cells and base nodes can also be managed through their respective consoles, and they can be registered with and managed by more than one job manager.
Figure 9. A job management topology
This job-oriented style of management is useful in many different scenarios. However, it is especially valuable in these environments:
Branch offices: Business enterprises such as retail stores typically have a central data center, often located close to corporate headquarters, and hundreds or thousands of geographically distributed stores or branches. Branches often have a few standalone application servers or a small Network Deployment cell. This small branch IT environment supports daily operations and is managed locally. Branch IT environments are also usually connected to the corporate data center, which could be thousands of miles away from some branches, over communications channels that can be low-bandwidth and relatively unreliable. The asynchronous, schedule-based, scalable qualities of job-oriented management are well suited for performing periodic administrative operations from the central data center out to the branches.
Figure 10. Geographically dispersed branch offices
Server farms: Server farms consist of large numbers of low-cost machines running identically configured application servers or clusters. Configuring each server individually is impractical, and these large scale environments might be problematic for synchronous distributed management. Job scheduling capabilities reduce issues with network bandwidth.
Multiple Network Deployment cells: Many IT environments support multiple Network Deployment cells. The V7 job manager can target jobs at deployment managers and also provides the capability to define jobs for specific resources within cells. The job manager therefore provides a single point of administrative control for these environments. These cells are sometimes geographically distributed, and the job manager's tolerance of network latency is well suited to those scenarios.
Setting up a job management topology is a simple matter of creating a job manager profile and then registering application server nodes and deployment manager nodes with it. In order for an application server node to be registered with a job manager, the application server must first be registered with an administrative agent on the same system. Base application server nodes that aren't registered with an administrative agent are unable to access jobs from a job manager. To set up a job management topology:
- Install WebSphere Application Server Network Deployment V7 on a system.
- Create a job manager profile on the system.
- Make sure that the job manager and any deployment managers and administrative agents have been started.
- Register nodes with the job manager.
- Adjust the job polling interval at the node, if necessary. The default polling interval is 30 seconds, which is appropriate for initial testing or for small production environments. A larger interval would be appropriate for some scenarios, such as a branch office network, and is necessary for larger numbers of managed nodes. When setting the polling interval, balance the CPU capacity of the system running the job manager against the number of nodes and the nodes' polling intervals.
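A sketch of the profile creation and registration steps (host names, ports, and profile names are illustrative assumptions; the bracketed parameter syntax follows the V7 AdminTask.registerWithJobManager command, which you can confirm with console command assistance):

```shell
# Create a job manager profile from the "management" template.
/opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh \
  -create \
  -profileName JobMgr01 \
  -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/management \
  -serverType JOB_MANAGER

# Register a node with the job manager. For a base node, run wsadmin against
# the administrative agent; for a cell, run it against the deployment manager.
/opt/IBM/WebSphere/AppServer/profiles/AdminAgent01/bin/wsadmin.sh \
  -lang jython \
  -c "AdminTask.registerWithJobManager('[-host jobmgr_host.example.com -port 9943 -managedNodeName node01]')"
```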
The job manager provides several useful features for submitting and tracking jobs for application server and deployment manager nodes. These features are available in both the console and wsadmin. The console provides full command assistance support to help with the new job manager administrative tasks:
- Job types: Job manager includes a large number of predefined job types for frequently performed tasks. In addition, there is a job type for running wsadmin scripts.
- Search capabilities: You can easily locate and specify targets, resources, and status for jobs in large environments.
- Target grouping: You can group nodes together so that you can reuse the same "distribution list" for different jobs.
- Job scheduling and expiration: You can make jobs available immediately or at some point in the future. You can schedule them to run repeatedly and to expire when they are no longer useful or valid.
- Job status: You can retrieve the status of all the jobs managed through a job manager and retrieve the details in the job history.
- E-mail notification: You can receive e-mail notification of the completion of jobs.
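As an illustration of these features from the job manager's wsadmin, a job submission might look like the following (the startServer job type and bracketed parameter syntax follow the V7 AdminTask.submitJob command; the target and server names are made up, and actual parameters vary by job type, so confirm them with console command assistance):

```shell
# Submit a job to start server1 on managed node node01.
/opt/IBM/WebSphere/AppServer/profiles/JobMgr01/bin/wsadmin.sh \
  -lang jython \
  -c "AdminTask.submitJob('[-jobType startServer -targetList [node01] -jobParams [serverName server1]]')"
```

submitJob returns a job token that can later be used to query the status of the job.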
The job manager console is a new administrative console introduced in WebSphere Application Server Network Deployment V7. Like other WebSphere consoles, it is based on the IBM Integrated Solutions Console (ISC) infrastructure with similar navigation and style. It has new features to support the new flexible management style:
The job manager console job submission wizard takes you through your options step-by-step to submit a job. It helps you by providing a list of job types to choose from. It has search capabilities to aid in finding job targets and resources used in jobs. For simplicity, efficiency and flexibility, the job submission user interface permits you to specify targets by individual target names or by predefined groups of targets.
Most job types have one or more required or optional parameters. The wizard shows the parameters for the chosen job type and enables you to enter your values. When parameters are node resources like application servers, or clusters, the wizard provides search capabilities for you to find the parameter values.
After setting the parameters, you can schedule a job to become available for its targets immediately or at some future date and time. Jobs are not available forever. By default, they expire in 24 hours. Alternatively, you can specify when the job will expire either by date or by elapsed time. You can change the default expiration time in the job manager configuration.
You might also want to run certain jobs on a recurring basis. Job scheduling permits you to specify an interval during which a job is available to its target nodes. For example, you might want to schedule an inventory job to become available every Friday evening at midnight and become unavailable Monday morning at 4:00 AM.
Finally, you can also specify one or more e-mail addresses for notification of job status. To use this feature, you must first define a mail session in the job manager configuration.
After you submit the job, a status table is displayed indicating the job identifier, its description, its state, when it became active, and when it will expire (Figure 11). In addition, the table includes a graphical representation summarizing the status of the job on each of its targets.
Each row of the status display represents the submission of a job and its status on each of its targets. The graphic portion shows the number of targets in each state and the width of the bar for each state is proportional to the number of targets in that state. Narrow bars without a number value indicate that no target is in those states.
Figure 11. Job status display
The job manager console has some differences from other WebSphere consoles that are worth understanding. The differences are related to job metadata, asynchronous jobs, and search capabilities:
The job manager console is used to submit jobs, check the status of jobs, and configure the job manager. Jobs can be submitted to nodes that have been registered with the job manager. The registration process sends a description of the capabilities of the node to the job manager. These capabilities are described using metadata and the content of the job manager console is driven off this metadata. For example, when you first create a job manager profile and start the console before registering any nodes, there are no jobs that can be performed. At this point, the only usable function in the job manager console is the configuration of the job manager server, mail sessions, security, and troubleshooting functions. If you try to submit a job, there are no known jobs to submit.
After the first node is registered, you see the capabilities of that node in the job manager console. For example, if you registered a base application server node, you would see jobs available related to servers, applications, and wsadmin. If, after that, you also registered a deployment manager with the job manager, you would see additional capabilities of the deployment manager, such as cluster management. The capability of the node is managed at the node. If new capabilities are introduced at the node, they are communicated to the job manager via an inventory job. After the inventory job completes successfully, the new capabilities are known to the console.
Asynchronous nature of jobs
When jobs are submitted, they are not necessarily run right away. The administrative agents and deployment managers pull jobs from the job manager when they are online, based on their configured polling intervals. Because jobs can also be created with windows of allowable execution, a job that is not retrieved during its validity period is not run. The job status summary page presents four types of job completion status: successful, partially successful, failed, or incomplete. Status is displayed in a chart (Figure 11) that lets you drill down to see which nodes completed in each state. The chart indicates status by color, as well as by position in the summary bar.
When you submit a job, it will start out with incomplete status. When the job is retrieved, the detailed status will show it as distributed, then in progress. The normal progression of jobs at that point is for their status to go to successful, partially successful, or failed. It takes at least two polling cycles to retrieve a job and then return results, and it could take additional cycles depending on how long the job actually takes to process on the node. The polling cycle time is configurable at the administrative agent or the deployment manager. You might find that the status summary shows incomplete nodes, but when you drill down, no nodes show in the list. When you refresh the status summary, you will find that either the node completed or failed in that window of time and is no longer incomplete.
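To check on a submitted job from wsadmin, something like the following could be used (the job token value is a made-up example; the getOverallJobStatus command name comes from the V7 job manager command group, so verify the exact parameter names with console command assistance in your environment):

```shell
# Query the overall status of a previously submitted job by its token.
/opt/IBM/WebSphere/AppServer/profiles/JobMgr01/bin/wsadmin.sh \
  -lang jython \
  -c "print AdminTask.getOverallJobStatus('[-jobTokenList [122480523409897029]]')"
```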
Find and search
All the information pertaining to nodes, resources, and jobs is maintained in a database at the job manager. Because the job manager can be used to manage a large number of nodes, information filtering is an important feature. If you go to view nodes in the Network Deployment console, all nodes in the topology are displayed. In the job manager, nodes as well as other job manager artifacts are retrieved by query. Each page with a collection table includes controls for finding objects that meet search criteria. You can set one or more query criteria to limit the results of the query. It is not necessary to set all fields unless you specifically want to limit the search based on that field. Each time you set find parameters, they are saved in your user console preferences for use the next time that page is loaded. You can modify your query or start over as needed. For each query, command assistance is available to see how the query fields you have set affect the AdminTask that is invoked to retrieve data.
The administrator role is required to register or deregister a base node with the administrative agent. When working with the administrative agent itself, you need the administrative roles that correspond to the operation being performed on the agent. When working with an administrative subsystem, you need the corresponding administrative roles for the registered base node.
These roles are needed to use the job manager:
- Registering/unregistering with job manager: administrator
- Submitting a job: operator
- Changing job manager configuration: configurator
- Reading job manager configuration or job history: monitor
When a job runs on a registered base node or deployment manager, the user must have privileges that include the role required for that job. For example, a job to create a new application server requires a minimum configurator role on either the base node or Network Deployment cell.
The administrative agent and job manager support two different basic security configurations: same security domain and different security domains. In the first configuration, all the cells in the topology share the same user registry, and therefore, the same security domain. The same is true of the administrative agent and its registered base nodes, and also any job manager or Network Deployment cells in the topology. In the second topology, all the cells are configured with different user registries, and therefore different security domains.
For the administrative agent topology, when a user logs in to the JMX connector port of an administrative subsystem, or chooses the registered node from the administrative console, the authorization table for the base node is used. In the same user registry example in Figure 12, John is authorized as administrator for the first base node, but is not authorized for the second node. Mary is authorized as configurator for the second node, but is not authorized for the first node. In the different user registries example in the same figure, John can log in to the job manager as an operator with his user name and password. John can also log in to the deployment manager as a monitor with his user name and password. Although this example shows the same user name for both the job manager and the deployment manager, he might just as easily have different user names and passwords.
Figure 12. User registries
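The same-registry half of Figure 12 can be modeled as one shared registry with a separate authorization table per base node. The user names and roles come from the example above; the data structures and the `login` function are illustrative assumptions:

```python
# One shared user registry (same security domain for all nodes).
SHARED_REGISTRY = {"john", "mary"}

# Each registered base node keeps its own authorization table,
# matching the Figure 12 same-registry example.
AUTH_TABLES = {
    "base_node_1": {"john": "administrator"},
    "base_node_2": {"mary": "configurator"},
}

def login(user: str, node: str) -> str:
    """Authenticate against the shared registry, then authorize
    against the target node's own authorization table."""
    if user not in SHARED_REGISTRY:
        raise PermissionError(f"{user}: unknown user")
    role = AUTH_TABLES[node].get(user)
    if role is None:
        raise PermissionError(f"{user}: not authorized on {node}")
    return role
```

This captures the key point of the example: authentication is shared, but authorization is per node, so John succeeds on the first base node and is refused on the second.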
When transferring a job from the job manager to the administrative agent or the deployment manager, security information about the job submitter is transferred with it. This information is used to authenticate and authorize the user when the job runs. The user security information passed with a submitted job can take two forms: a user name and password, or a security token:
- Passing user name and password
As part of the job submission, the user can specify a user name and password. When the job reaches the administrative subsystem or the deployment manager, the user name and password are used to log in, just as if the user had logged in through the administrative console or a JMX connector. For example, in the same user registry case, if John submits a job against the first base node, he can specify his user name and password as part of the job. The user name and password are used to log in to the first administrative subsystem, and the job runs. If John were to submit a job against the deployment manager cell or the second base node, the job would fail because John is not authorized there.
For the different user registry configuration, John can submit a job against the deployment manager cell, specifying the user name and password that he holds for that cell. When the job reaches the deployment manager, the login succeeds and the job runs. If John were to submit a job against either base node, access would be denied and the job would fail.
- Passing security token
If the user does not specify a user name and password with a job, the user's current credential is automatically persisted as a security token in the job database. The token contains the user's security attributes, including customization attributes and groups. When a job is fetched, the token is used to authenticate and authorize against the administrative subsystem or the deployment manager. For the same user registry scenario, John can submit a job against the first base node without specifying a user name and password. The job runs because John's credential is automatically propagated as a security token to the administrative subsystem and used to authenticate and authorize him for the job. If John were to submit a job against the second base node or the deployment manager cell, the job would fail because his security token is not authorized in those two environments.
For the different user registries configuration, a user's security token does not automatically allow the submitted job to run against the administrative subsystem or the deployment manager. To enable a user's token for a different realm, you use the multiple security domain feature introduced in V7. First, the job manager's realm must be established as a trusted realm for the registered base nodes and the deployment manager cell. In addition, the access ID of the user, or of the user's group, from the job manager must be imported into the local authorization table and given a role. Once this is done, the user can submit a job without passing a user name and password.
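The two credential forms described above can be sketched together. `Job`, `SecurityToken`, the per-target tables, and the trusted-realm check below are simplified stand-ins for WebSphere's internal mechanisms, and the sample data mirrors the running example (John shares a registry with the first base node but not with the deployment manager cell):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecurityToken:
    user: str
    realm: str          # realm (user registry) that issued the token

@dataclass
class Job:
    target: str
    username: Optional[str] = None
    password: Optional[str] = None
    token: Optional[SecurityToken] = None

# Per-target state: local realm, local registry, trusted foreign realms,
# and an authorization table keyed by (user, realm) so it can hold
# imported foreign access IDs.
TARGETS = {
    "dmgr_cell": {
        "realm": "dmgrRealm",
        "registry": {"john": "secret"},
        "trusted_realms": set(),
        "auth_table": {("john", "dmgrRealm"): "monitor"},
    },
    "base_node_1": {
        "realm": "jmgrRealm",   # shares the job manager's registry
        "registry": {"john": "secret"},
        "trusted_realms": set(),
        "auth_table": {("john", "jmgrRealm"): "administrator"},
    },
}

def run_job(job: Job) -> str:
    """Authenticate and authorize a fetched job; return the role it runs with."""
    t = TARGETS[job.target]
    if job.username is not None:
        # Form 1: user name and password -> a normal login at the target.
        if t["registry"].get(job.username) != job.password:
            raise PermissionError("login failed")
        user, realm = job.username, t["realm"]
    else:
        # Form 2: the security token persisted with the job.
        tok = job.token
        if tok.realm != t["realm"] and tok.realm not in t["trusted_realms"]:
            raise PermissionError("untrusted realm")
        user, realm = tok.user, tok.realm
    role = t["auth_table"].get((user, realm))
    if role is None:
        raise PermissionError("not authorized")
    return role
```

In this sketch, John's token works against the first base node (same realm), a password works against the deployment manager cell, and the token is rejected by the cell until its realm is trusted and his access ID is imported.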
As an example, John is an operator on the job manager, but his access ID is imported as an administrator in the first base node's administrative authorization table. Even though John does not exist in the user registry of the base node, through the passed security token and the administrative authorization table definition, John is authorized as an administrator on the base node. John can submit a job for the first base node without having to specify a user name or password.
If John were to submit a job against the deployment manager, the job would fail, because John's security token is from the job manager realm and John's access ID has not been authorized for the deployment manager. In this case, the administrator can export John's access ID from the job manager and import it into the deployment manager. Alternatively, John can submit the job passing the user name and password that he holds for the deployment manager, which lets him run jobs with the monitor role.
The same mechanism works if the fine-grained security feature (available since V6.1) is in use. You need to be authorized in the authorization table for the relevant authorization group, and that authorization table can also contain an external access ID.
Mixed registries scenario
It is possible to create a more complex topology, where some cells share the same user registry and some don't. In such a topology, the following rules of thumb apply:
- You can always specify a user name and password during job submission if you have a user name and password recognized by the target node or deployment manager.
- If the job manager and the target node or deployment manager have the same user registry, then job submission does not require a user name and password. You do, however, need to be authorized for the target node or deployment manager.
- If the job manager and the target node or deployment manager have different user registries, and trusted realms have been established, and the access ID of the job submitter or job submitter's group has been imported into the administrative authorization table of the target node or deployment manager, then job submission does not require a user name and password either.
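These rules of thumb reduce to a small decision. The function below is an illustrative sketch of that decision, not a WebSphere API; parameter names are assumptions:

```python
def credentials_required(job_mgr_realm: str, target_realm: str,
                         trusted_realms: set, imported_access_ids: set,
                         submitter_id: str, submitter_groups: set) -> bool:
    """Return True if the job submitter must supply a user name and
    password for the target; False if the persisted token suffices."""
    if job_mgr_realm == target_realm:
        # Same registry: the token works (the submitter must still be
        # authorized in the target's authorization table).
        return False
    if job_mgr_realm in trusted_realms:
        # Different registries, but the job manager realm is trusted:
        # the token works if the submitter's access ID, or that of one
        # of the submitter's groups, was imported into the target's
        # administrative authorization table.
        ids = {submitter_id} | submitter_groups
        return not (ids & imported_access_ids)
    # Different, untrusted registries: a user name and password are needed.
    return True
```

For example, with the same realm no password is needed; with different realms a password is needed unless the job manager realm is trusted and the submitter's access ID (or group) has been imported.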
With two new management topologies in IBM WebSphere Application Server V7, you now have complete flexibility in how you manage your application serving environments. The base application server environment provides a simple, single server environment. The new administrative agent topology enables you to manage multiple base profiles efficiently on a single computing system. The familiar deployment manager topology provides a tightly coupled distributed environment supporting multiple servers on multiple systems. Lastly, the new job manager topology enables you to manage administrative agent and deployment manager topologies from a single job-oriented console while adding important new capabilities, such as job scheduling, e-mail notification of results, and the ability to define groups of node targets to enable efficient and consistent management of large environments.
Michael Cheng has been a contributor to IBM middleware technology for over a decade, spanning Object Request Brokers, Enterprise JavaBeans, web services, and system management. Michael is currently the release architect for WebSphere Application Server.
Roger Cundiff is a software engineer with IBM working in the WebSphere Application Server development organization in Austin, Texas. He started the development team for application server systems management in Austin ten years ago and is currently working on flexible management and other management features.
Cindy High has worked on the WebSphere Application Server in Austin, Texas since 2001. Her 25 years of experience in the software industry span the finance industry, network communications, software services, and WebSphere systems management and console development. Cindy's current work includes the Job Manager console.