Using Oracle Database File System for disaster recovery on IBM Business Process Manager

Because business process management plays a very important role in an enterprise architecture, it's critical to make sure that BPM is an integral part of any enterprise disaster recovery plan. This article introduces a disaster recovery strategy for customers who are using Oracle® Data Guard for disaster recovery on existing systems and want to extend it to include IBM® Business Process Manager.

Yu Zhang (zhangyzy@cn.ibm.com), BPM SVT Architect, IBM

Yu Zhang is a system verification test (SVT) architect for business process management (BPM) at the IBM China Software Development Lab. He has rich experience in test methodologies and a deep technical background in the J2EE and BPM areas. He is currently focused on BPM high availability and disaster recovery testing.



Eric Herness (herness@us.ibm.com), Distinguished Engineer, IBM

Eric Herness is an IBM Distinguished Engineer and the Chief Architect for business process management (BPM) in IBM Software Group. Eric is also the CTO for the business unit focused on BPM and operational decision management (ODM), where he leads the architects who define product and technical direction for the business.

Eric has worked with many large customers as they have adopted BPM and ODM approaches. He has had key lead architectural roles in WebSphere for more than 15 years. Eric has an MBA from the Carlson School at the University of Minnesota.



Hong Yan Wang (whongyan@cn.ibm.com), BPM SVT Tester, IBM

Hong Yan Wang is an SVT tester for business process management (BPM) at the IBM China Software Development Lab. She has over 10 years of experience in software development and testing.



20 March 2013

Disaster recovery strategies

Following are some common types of disaster recovery solutions:

  • Database-based disaster recovery solutions. Database systems such as IBM DB2® have specific features that can be used to replicate the primary database data to a standby database. These solutions are commonly adopted by applications that interact with only a single database.
  • Host-based disaster recovery solutions. Modern operating systems such as AIX® and Linux® have built-in features that can provide point-in-time snapshots. These features are widely used by applications that directly use files to store data and don't require a short recovery point objective (RPO) and recovery time objective (RTO).
  • Fabric-based disaster recovery solutions. These methods focus on transferring data within the storage network from a source fabric to a destination fabric using special hardware, such as the IBM SAN Volume Controller, which provides both Global Mirror and Metro Mirror capabilities [1] to implement data protection. These solutions are attractive because they can accommodate different storage subsystems.
  • Storage subsystem-based disaster recovery solutions. These are also called controller-based solutions because they use a storage controller to transfer data from a source storage subsystem to a destination storage subsystem. Usually controller-based solutions are homogeneous, with data copied between disk arrays from the same manufacturer and often within the same family of products. A dedicated transport channel is commonly required to link the two sites. For example, Global or Metro Mirror [1] can be configured between two IBM System Storage DS series systems to implement disaster recovery.

IBM BPM disaster recovery requirements

BPM is all about connecting different systems to achieve end-to-end execution and visibility. IBM BPM will inevitably run in a heterogeneous environment, driving different systems to complete business transactions. To achieve this, IBM BPM leverages WebSphere Application Server to coordinate multiple transactional systems. This means that, generally speaking, a database-level disaster recovery strategy is not viable for IBM BPM, because transaction recovery information is persisted in a file system and cannot be managed by the database's replication process. Combining file system-level replication with database-level replication will also not work, because that strategy can't guarantee point-in-time data consistency [2]. Therefore, BPM requires an application-independent disaster recovery solution to guarantee data consistency. This article describes a general disaster recovery strategy for IBM BPM that can be adopted in a production environment [3].


Combining DBFS and Data Guard for disaster recovery

The Oracle® Database File System (DBFS) is a feature that creates a standard file system interface on top of files and directories that are stored in database tables. DBFS is similar to NFS (Network File System) in that it provides a shared network file system that looks like a local file system. So if the transaction recovery logs are configured to use DBFS, and the BPM schemas and all involved data sources reside in the same database, everything can be replicated to a standby Oracle database and in-doubt transactions can be recovered in the event of a disaster at the primary data center. In the next section, we'll describe how to set up a typical environment and configure DBFS for IBM BPM and Oracle Data Guard for replication.

Because DBFS is available only on Oracle 11g R2 or later, and the DBFS client requires FUSE (Filesystem in Userspace) to present a normal file system, this solution applies only to customers who use IBM BPM with an Oracle 11g R2 or later database and whose DBFS client host runs Linux with a kernel version of 2.6.27 or above (required for the FUSE NFS export support). The approach described in this article provides an alternative to hardware-managed replication strategies.


Implementation details

This section describes how to set up a typical disaster recovery environment, including how to configure DBFS for IBM BPM and Oracle Data Guard for replication to implement a DBFS-based disaster recovery strategy.

Solution topology

Figure 1 shows the topology of the proposed solution that we'll use to demonstrate how to configure DBFS with Oracle Data Guard to achieve database-level disaster recovery. The key element in this solution is configuring WebSphere transaction logs and compensation logs in DBFS so that they can be replicated to the standby site by Oracle Data Guard. We'll walk through the steps to set up this environment to implement this solution. This topology is consistent with the recommended approach for laying out golden topologies for IBM BPM [4].

Figure 1. Solution topology

Following is a high-level overview of the set-up procedures, which will be described in detail later in the article:

  1. On the DBFS server, create a tablespace to hold the DBFS file system, grant the necessary permissions, and execute an Oracle script to create the file system.
  2. On the DBFS server, install and configure FUSE, which is required to enable access to the new file system using typical operating system commands (such as ls and mkdir) instead of invoking commands through the Oracle dbfs_client.
  3. On the DBFS client, verify that operating system libraries are visible and at the appropriate level to enable DBFS commands to succeed.
  4. On the DBFS client, create a mount point and use dbfs_client commands to mount the DBFS file system on the Linux operating system. Verify that the DBFS can be accessed using Linux file commands.
  5. Start the NFS server software to manage distributed access to the file system backed by the Oracle DBFS.
  6. On the BPM server, use the mount command to make the remote DBFS point available on the local machine.
  7. Use the WebSphere Application Server administrative console to configure the BPM server's transaction log and compensation log onto the mount point.

A possible alternative configuration adds a third machine providing NFS server functionality. The steps required to configure an external NFS server are identical to the ones described here, except that the FUSE, DBFS client, and NFS server steps are executed on the third machine.

Overview of BPM disaster recovery lab environment

The lab environment used for this solution consists of two BPM environments:

  • A primary production site
  • A standby disaster recovery site

The two data centers were connected by a LAN for testing, but in a real disaster recovery topology, more distance would be expected between the primary and recovery sites. For demonstration purposes, the two data centers have the same hardware and software configuration but different IP addresses. During normal processing, the entire IBM BPM environment at the standby site is stopped and only the standby Oracle database is started for replication purposes.

Table 1 shows the internal hardware and software configuration we used for testing the primary and standby sites.

Table 1. Software configuration on primary and standby sites

Primary site:
  • Host: rehl217.cn.ibm.com (IP 9.115.198.217), OS: Red Hat Enterprise Linux Server R6.1
    Installed components: Oracle 11g R2
  • Host: dmgr.cn.ibm.com (IP 9.115.198.57), OS: Red Hat Enterprise Linux Server R6.1
    Installed components: BPM Advanced Edition V8.0 (DMGR node and custom node)
    Other configuration: a host alias for the Oracle machine in the hosts file: 9.115.198.217 oracle.cn.ibm.com
  • Host: custom.cn.ibm.com (IP 9.115.198.58), OS: Red Hat Enterprise Linux Server R6.1
    Installed components: BPM Advanced Edition V8.0 (custom node)
    Other configuration: a host alias for the Oracle machine in the hosts file: 9.115.198.217 oracle.cn.ibm.com

Standby site:
  • Host: rehl218.cn.ibm.com (IP 9.115.198.218), OS: Red Hat Enterprise Linux Server R6.1
    Installed components: Oracle 11g R2
  • Host: dmgr.cn.ibm.com (IP 9.115.198.67), OS: Red Hat Enterprise Linux Server R6.1
    Other configuration: a host alias for the Oracle machine in the hosts file: 9.115.198.218 oracle.cn.ibm.com
  • Host: custom.cn.ibm.com (IP 9.115.198.68), OS: Red Hat Enterprise Linux Server R6.1
    Other configuration: a host alias for the Oracle machine in the hosts file: 9.115.198.218 oracle.cn.ibm.com

Notes:

  1. For the sake of simplicity, the root user is used for the IBM BPM installation and NFS mountings, as described later.
  2. In a real production environment, it may not be possible to have the same host name for different machines or to use a static hosts file for DNS resolution. This configuration is for demonstration purposes only.

Set up the primary site

This section describes how to set up and configure the primary environment.

Install the BPM binaries

Install the IBM BPM binaries at the primary site. In this scenario, we will install IBM BPM on two machines. For the standby site, we won't install the BPM binaries; instead, we'll synchronize the installation and profile configuration from the primary site to the standby site later.

Note: It's also possible to install IBM BPM at the standby site and restore the primary site configuration to the standby site by using profile backup and restore commands.
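If you choose that route, the WebSphere manageprofiles command provides the backupProfile and restoreProfile options. The following is a minimal sketch; the installation root, profile name, and backup file path are illustrative assumptions, and the profile should be stopped before it is backed up:

# On the primary site, back up the Deployment Manager profile
# (the installation root, profile name, and backup path below are illustrative)
/opt/ibm/BPM/bin/manageprofiles.sh -backupProfile \
    -profileName DmgrProfile -backupFile /backup/DmgrProfile.zip

# On the standby site, restore the profile from the backup file
/opt/ibm/BPM/bin/manageprofiles.sh -restoreProfile \
    -backupFile /backup/DmgrProfile.zip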

Set up and configure Oracle

For the primary site, install Oracle as usual and create a BPM instance.

Use the database design tool DBDesignGenerator [5] to create BPM database scripts and run them on the primary Oracle database machine to create IBM BPM database objects.
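As a rough sketch of that flow (the installation path and script name are assumptions based on a default BPM installation and may differ in your environment; the interactive tool prompts you for the actual database values, and the generated script name below is only a placeholder):

# Run the interactive database design tool on a BPM machine
# (/opt/ibm/BPM is an illustrative installation root)
/opt/ibm/BPM/util/dbUtils/DbDesignGenerator.sh

# Copy the generated SQL scripts to the primary Oracle machine and run them
# as the schema user defined in your database design file, for example:
sqlplus dbuser/password@orclfs @<generated_create_script>.sql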

Set up the DBFS server

To set up the DBFS server, do the following:

  1. Execute sqlplus and create a regular tablespace as the container to store files, as shown below. In this example, the tablespace name is dbfs_ts.
    SQL> CONN / AS SYSDBA
    SQL> CREATE TABLESPACE dbfs_ts
         DATAFILE '/u01/app/oracle/oradata/DB11G/dbfs01.dbf'
         SIZE 1M AUTOEXTEND ON NEXT 1M;
  2. Execute sqlplus and create a database user and grant proper roles, as shown below. This user will be used by the DBFS client to work with Oracle. Here the user name is dbfs_user and the default tablespace is the dbfs_ts created in step 1.
    SQL> CONN / AS SYSDBA
    SQL> CREATE USER dbfs_user IDENTIFIED BY Passw0rd
         DEFAULT TABLESPACE dbfs_ts QUOTA UNLIMITED ON dbfs_ts;
    
    SQL> GRANT CONNECT, CREATE SESSION, RESOURCE, CREATE VIEW, DBFS_ROLE,
         CREATE TABLE, CREATE PROCEDURE TO dbfs_user;
  3. Execute dbfs_create_filesystem, as shown below, to create a file system on the tablespace. Here the file system name is staging_area.
    $cd $ORACLE_HOME/rdbms/admin
    $sqlplus dbfs_user/Passw0rd
    SQL> @dbfs_create_filesystem.sql dbfs_ts staging_area
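To confirm that the new file system exists, you can list the DBFS root with dbfs_client; the connect string below follows the pattern used later in this article and should be adjusted to your own listener and service name:

# List the file systems stored in the DBFS repository;
# staging_area should appear in the output
dbfs_client dbfs_user@localhost:1521/orclfs.cn.ibm.com --command ls dbfs:/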

Set up FUSE (Linux only)

In order to use standard file system commands such as ls and mkdir to access the Oracle DBFS, you need to install FUSE on the DBFS client machine. Refer to [6] for the steps to download and install FUSE. This section provides a short version for your reference.

You must enable the noforget option before installing FUSE. If this is not enabled, you will run into the transaction log failure exception in Listing 1.

Listing 1. Transaction log failure exception
0000007f MultiScopeRec A   
CWRLS0008E: Recovery log is being marked as failed. [ 2 transaction ]
0000007f MultiScopeRec I   
CWRLS0009E: Details of recovery log failure: 
java.lang.NullPointerException
at java.nio.ByteBuffer.put(ByteBuffer.java:795)
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:355)
at java.nio.ByteBuffer.put(ByteBuffer.java:824)
at com.ibm.ws.recoverylog.spi.WriteableLogRecord.<init>(WriteableLogRecord.java:82)

In order to enable the noforget option, you need to edit the FUSE source file helper.c and change the fuse_main_common procedure from:

Listing 2. Original fuse_main_common procedure
static int fuse_main_common(int argc, char *argv[],
		const struct fuse_operations *op, size_t op_size,
	       void *user_data, int compat)
{
	struct fuse *fuse;
	char *mountpoint;
	int multithreaded;
	int res;
	fuse = fuse_setup_common(
               argc, argv, op, op_size, &mountpoint,
		   &multithreaded, NULL, user_data, compat);
	if (fuse == NULL)
		return 1;
	if (multithreaded)
		res = fuse_loop_mt(fuse);
	else
		res = fuse_loop(fuse);
	fuse_teardown_common(fuse, mountpoint);
	if (res == -1)
		return 1;
	return 0;
}

to:

Listing 3. Noforget enabled fuse_main_common procedure
static int fuse_main_common(int argc, char *argv[],
	       const struct fuse_operations *op, size_t op_size,
		  void *user_data, int compat)
{
	struct fuse *fuse;
	char *mountpoint;
	int multithreaded;
	int res;
	int i;

	/*
	 * Build a new argument vector with the noforget option appended.
	 */
	char debug[] = {"-onoforget"};
	char *nargv[argc + 1];
	for (i = 0; i < argc; i++)
		nargv[i] = argv[i];
	nargv[argc] = debug;

	fuse = fuse_setup_common(
                 argc+1, nargv, op, op_size, &mountpoint,
		     &multithreaded, NULL, user_data, compat);
	if (fuse == NULL)
		return 1;
	if (multithreaded)
		res = fuse_loop_mt(fuse);
	else
		res = fuse_loop(fuse);
	fuse_teardown_common(fuse, mountpoint);
	if (res == -1)
		return 1;
	return 0;
}

Then compile and install FUSE as shown below:

Listing 4. Install and configure FUSE
# echo /usr/src/kernels/`uname -r`-`uname -p`
/usr/src/kernels/2.6.32-131.0.15.el6.x86_64-x86_64

# tar -xzvf fuse-2.9.1.tar.gz
# cd fuse-2.9.1
# ./configure --prefix=/usr \
	--with-kernel=/usr/src/kernels/2.6.32-131.0.15.el6.x86_64-x86_64
# make
# make install
# /sbin/depmod
# /sbin/modprobe fuse
# chmod 666 /dev/fuse
# echo "/sbin/modprobe fuse" >> /etc/rc.modules
# chmod +x /etc/rc.modules

In our example, we used FUSE version 2.9.1.
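As a quick sanity check after the installation, you can confirm that the kernel module is loaded and that the device node has the permissions set above:

# Confirm that the fuse kernel module is loaded
/sbin/lsmod | grep fuse

# Confirm that /dev/fuse is readable and writable (set by chmod 666 earlier)
ls -l /dev/fuse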

Set up the DBFS client

For the DBFS client, DBFS functionality is installed by default when installing Oracle. In this example, because the DBFS client and Oracle are running on the same machine, you don't need to install the Oracle client. If you'd rather have the DBFS client on a different machine than the Oracle machine (we'll describe other possible deployment patterns later), you need to install the Oracle client on the target machine. For detailed steps, refer to [6].

Once the DBFS client is installed, you need to check whether its shared object dependencies are complete. If they are not, you may run into an error loading shared libraries when you execute dbfs_client, as shown in Figure 2.

Figure 2. Error loading shared libraries

You can use the ldd command to discover the missing dependencies, as shown in Figure 3.

Figure 3. Find missing dependencies with ldd command

To resolve the problem, switch to the root user and create links for the missing shared objects using the following commands, as shown in Listing 5.

Listing 5. Create shared object links
# echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf
# export ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
# cd /usr/local/lib 
# ln -s $ORACLE_HOME/lib/libclntsh.so.11.1 
# ln -s $ORACLE_HOME/lib/libnnz11.so
# ln -s /usr/lib/libfuse.so
# ldconfig

After the DBFS client is installed, verify the configuration using the following DBFS client commands:

  1. Use mkdir to create a new folder in DBFS as follows:
    $./dbfs_client dbfs_user@localhost:1521/orclfs.cn.ibm.com \
    --command mkdir dbfs:/staging_area/dir1

    Figure 4 shows example results from a mkdir and ls command.

    Figure 4. mkdir and ls command results
  2. Use cp to copy a file into the DBFS file system as follows:
    $echo "great oracle fs" > test.txt
    $./dbfs_client dbfs_user@localhost:1521/orclfs.cn.ibm.com \
    --command cp test.txt dbfs:/staging_area/

    Figure 5 shows the results from the cp command.

    Figure 5. cp command results
  3. Mount the DBFS content onto a local directory so that you can access it using standard file system commands instead of the dbfs_client command. Make sure that /dbfs is created first and that its owner is set to a user who can run dbfs_client; in our example, we made the owner oracle and the group dba. In the following example, DBFS is mounted at the /dbfs path.
    $./dbfs_client dbfs_user@localhost:1521/orclfs.cn.ibm.com \
    -o rw,oracle,direct_io /dbfs

    By default, /dbfs can only be accessed by a specific Oracle user. In order to make it accessible to the root user, you need to make the following changes:

    • In the FUSE setting:
      [root@rehl217 dir1]# vi /etc/fuse.conf

      Add the following:
      # Allow users to specify the 'allow_root' mount option.
        user_allow_other
    • Remount with the 'allow_root' option, as shown here:
      $./dbfs_client dbfs_user@localhost:1521/orclfs.cn.ibm.com \
      -o rw,oracle,allow_root,direct_io /dbfs

    Note: The mount may fail if the installed version of the mount utility is too old. The following error is reported when mount does not support the --no-canonicalize option:

    ./dbfs_client dbfs_user@localhost:1521/orclfs \
    -o rw,oracle,direct_io /dbfs
    /bin/mount: unrecognized option `--no-canonicalize'

    Installing a newer version of the mount utility (provided by util-linux-ng) fixes the issue, as shown here:

    #tar xvf util-linux-ng-2.18.tar.bz2
    #cd util-linux-ng-2.18
    #./configure --prefix=/usr/local/mount-new
    #make && make install
    #mv /bin/mount{,.off}
    #ln -sv /usr/local/mount-new/bin/mount /bin/

    Figure 6 shows the results from the mount command.

    Figure 6. Mount command results
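Before exporting the mount over NFS, it's worth running a quick smoke test to confirm that ordinary file operations work through the mount point; the file name below is arbitrary:

# Write, read, list, and remove a file through the FUSE mount
echo "hello dbfs" > /dbfs/staging_area/smoketest.txt
cat /dbfs/staging_area/smoketest.txt
ls -l /dbfs/staging_area
rm /dbfs/staging_area/smoketest.txt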

Export the NFS

NFS is part of the overall disaster recovery solution because automated peer recovery of transactions requires a distributed file system: when one cluster member becomes unavailable for either hardware or software reasons, another active cluster member must be able to access the unavailable member's transaction logs to recover any in-doubt transactions.

At this point, DBFS behaves like a normal local file system, but it's not yet accessible from remote machines. To export it over NFS, do the following:

  1. Issue the following command:
    [root@rehl217 dir1]# vi /etc/exports
       /dbfs/staging_area  *(sync,rw,fsid=1,no_root_squash)
  2. Restart the NFS service as follows:
       /etc/rc.d/init.d/nfs stop
       /etc/rc.d/init.d/nfs start
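After restarting the service, you can confirm that the export is active and visible over the network (oracle.cn.ibm.com is the host alias used throughout this article):

# On the DBFS server, list the active NFS exports
exportfs -v

# From a BPM machine, confirm that the export can be seen remotely
showmount -e oracle.cn.ibm.com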

The DBFS is now ready for use. The next step is to mount it on all BPM machines, create the BPM deployment environment, and configure the transaction logs and compensation logs to use the DBFS mount point.


Set up IBM BPM

For each IBM BPM custom node, you need to mount the DBFS point as follows:
#mount oracle.cn.ibm.com:/dbfs/staging_area /mnt/dbfs
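Note that the /mnt/dbfs mount point must exist on each BPM machine before you run the mount command. A minimal sketch follows; the /etc/fstab entry is optional and shown only as an illustration, and you should make sure the DBFS export is available before the BPM servers start:

# Create the local mount point on each BPM machine (run once)
mkdir -p /mnt/dbfs

# Optional: persist the mount across reboots
echo "oracle.cn.ibm.com:/dbfs/staging_area  /mnt/dbfs  nfs  defaults  0 0" >> /etc/fstab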

At the primary site, install IBM BPM on the two machines and create a typical golden topology [4]. When setting the Oracle server's address, we used the host alias name oracle.cn.ibm.com instead of the real host name or IP address. This made the standby site set-up less complex.

After creating the golden topology, start the deployment environment to verify that the configuration is correct and that there are no errors in the log files. Once everything works, you can configure the transaction logs and compensation logs on DBFS.

Log in to the admin console and, for each server, configure the transaction log directory and recovery log directory to the DBFS mount point. Because there are eight servers, you need to configure eight different paths under /mnt/dbfs.
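Because each server needs its own directory, it can be convenient to pre-create the paths from a shell before changing the console settings. The directory naming scheme below is only an illustrative assumption; use whatever per-server naming convention suits your topology:

# Pre-create one transaction log and one compensation log directory per server
# (the server names are illustrative; substitute your own cluster member names)
for n in 1 2 3 4 5 6 7 8; do
    mkdir -p /mnt/dbfs/tranlog/server$n
    mkdir -p /mnt/dbfs/compensation/server$n
done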

Select a server and navigate to its transaction service page. Under General Properties, set the transaction log directory value to a dbfs path, as shown in Figure 7.

Figure 7. Set server's transaction log directory to dbfs

Navigate to the selected server's compensation service page. Under General Properties, set the recovery log directory value to a dbfs path, as shown in Figure 8. Repeat these steps for all servers, configuring their transaction and compensation log directories under the same mount point while making sure each server uses a different path.

Figure 8. Set server's compensation log directory to dbfs

For NFS version 4, you can enable automated peer recovery; otherwise you'll need to use manual recovery. Refer to [7] for a comprehensive discussion of this topic.
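To see which NFS version a BPM machine actually negotiated for the DBFS mount, you can inspect the mount options (a quick check, not part of the formal procedure):

# Look for vers=4 (or type nfs4) in the output for the /mnt/dbfs mount
nfsstat -m
mount | grep /mnt/dbfs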

If automated peer recovery is desired, you need to check Enable high availability for persistent services, as shown in Figure 9.

Figure 9. Enable high availability for persistent services

Note: For IBM BPM Standard, the compensation service is not enabled, so there is no need to set the recovery log directory for the compensation service.


Set up the standby site

This section describes how to set up and configure the standby environment.

  1. Install Oracle on the standby site. The database will be created when you configure Oracle Data Guard.
  2. Refer to the Oracle Data Guard configuration guide [8] for the steps to configure Data Guard for the two Oracle instances.
  3. You don't need to configure the DBFS server because Data Guard configuration brings all database objects from the primary site to the standby site.
  4. The configuration steps for FUSE are the same as those for the primary site. Refer to the Set up FUSE section to set up and configure FUSE on the standby site.
  5. Because the DBFS client is already on the Oracle machine, no additional set up is required for the DBFS client.
  6. Because the standby site has the same hostname as the primary site, you don't need to re-install and configure the IBM BPM environment; instead, you can just copy the entire IBM BPM environment (both binary files and profiles) from the primary site to the standby site.
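One way to copy the environment is with rsync; the installation root below is an illustrative assumption, and the deployment environment on the primary site should be stopped (or a consistent copy taken) before transferring:

# Copy the BPM installation (binaries and profiles) from the primary
# Deployment Manager machine to the standby one (9.115.198.67);
# repeat for the custom node machine (9.115.198.68).
# /opt/ibm/BPM is an illustrative installation root.
rsync -avz /opt/ibm/BPM/ root@9.115.198.67:/opt/ibm/BPM/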

Verify the solution

Now that the primary site is running, you can start business process instances (either BPD or BPEL) and then failover to the standby site to verify the solution as follows:

  1. On the primary site, start several business process instances and make them wait on different activities.
  2. Shut down the primary site's Oracle machine.
  3. At the standby site, on the Oracle database server machine, activate the standby Oracle as follows:
    alter database recover managed standby database finish force; 
    alter database commit to switchover to primary WITH SESSION SHUTDOWN; 
    alter database open;
  4. On the standby site's Oracle machine, mount and export the DBFS file system as follows:
    # First create the /dbfs mount point if it does not exist yet. Refer to the
    # primary site DBFS client set-up for more details.
    $./dbfs_client dbfs_user@localhost:1521/orclfs.cn.ibm.com -o rw,oracle,direct_io \
    /dbfs
    # vi /etc/exports
       /dbfs/staging_area  *(sync,rw,fsid=1,no_root_squash)
  5. Restart the NFS service as follows:
       /etc/rc.d/init.d/nfs stop
       /etc/rc.d/init.d/nfs start
  6. On the standby site, for each IBM BPM machine, mount the remote DBFS as follows:
    #mount oracle.cn.ibm.com:/dbfs/staging_area /mnt/dbfs
  7. On the standby site, start the Deployment Manager and the deployment environment.
  8. On the standby site, verify the following:
    • All servers can be started without any error
    • Transaction recovery can be processed successfully on one cluster member (you can check the SystemOut log to see which cluster member is selected to do transaction recovery; see the sketch after this list)
    • Running process instances can move to the next step successfully.
    • New process instances can be started successfully.
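To carry out the transaction recovery check mentioned above, you can search each cluster member's SystemOut.log for transaction service messages. The WTRN message prefix belongs to the WebSphere transaction service; the exact message IDs and the profile path below are illustrative assumptions:

# Search the server logs on a standby custom node for recovery activity
# (/opt/ibm/BPM/profiles/Custom01 is an illustrative profile path)
grep -i "WTRN" /opt/ibm/BPM/profiles/Custom01/logs/*/SystemOut.log | grep -i recover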

Performance impacts

Internal measurements indicate that DBFS introduces performance overhead that is not present when using a native file system, but the overhead is likely to be tolerable for production environments. As with any configuration change, performance measurements should be collected for your environment to determine whether the impact is acceptable.


Other deployment patterns

In the example solution in this article, the DBFS client is on the Oracle machine. This is the easiest approach, but it may not be suitable for a real deployment. Figures 10 and 11 illustrate two alternative patterns, and Table 2 provides a brief comparison between those two patterns and the one used in this article.

Figure 10 illustrates a DBFS client installed on a separate NFS server (which must be a Linux system).

Figure 10. DBFS on a separate NFS server

Figure 11 illustrates a pattern in which all BPM machines have a DBFS client installed.

Figure 11. DBFS on BPM machines
Table 2. Comparison of alternative deployment patterns

  • DBFS client on the Oracle machine
    Manageability: Medium
    OS requirement: Oracle must be installed on a Linux system
    Peer recovery: Fully supports automated peer recovery with NFS v4

  • DBFS client on a separate NFS server
    Manageability: High
    OS requirement: The NFS server must be a Linux system
    Peer recovery: Fully supports automated peer recovery with NFS v4

  • DBFS client on the BPM machines
    Manageability: Low
    OS requirement: BPM must be installed on a Linux system
    Peer recovery: Not safe. Although DBFS provides a centralized access point, it does not have a global lock mechanism. Refer to [9] for more information.

Conclusion

In this article, you learned how to configure a disaster recovery solution for IBM Business Process Manager that combines Oracle DBFS and Data Guard to achieve database-level disaster recovery.

Long-running processes are increasingly used to support mission-critical business operations. The disaster recovery approach outlined in this article leverages available support from Oracle, combined with basic features in WebSphere Application Server, to provide disaster recovery that doesn't rely on SAN or storage-level replication techniques but instead leverages only the database layer. The major benefits of this approach are that it is hardware-independent, requires less configuration and maintenance effort, and offers a shorter recovery time objective (RTO).


Acknowledgments

The authors would like to thank Ian Robinson for providing guidance on DBFS filesystem testing, Jon Hawkes and Neil Young for providing test scripts and technical support on the testing, Jing Jing Wei for helping with the test execution, and Karri S. Carlson-Newmann and Chris Richardson for reviewing this article and providing valuable suggestions and comments.

Resources
