Friday, February 17, 2012

Direct NFS speeds up Data Pump

Oracle introduced the Direct NFS feature in Oracle Database 11gR1. It provides improved performance over regular kernel NFS for the Oracle database; I like to think of it as NFS on steroids. It can be used for datafiles, archivelogs, RMAN backups, and Data Pump files. There are a lot of merits to Direct NFS, including direct I/O and better scalability, and Kevin Closson has documented quite a few of them on his blog. However, I just wanted to demonstrate how this feature provided a significant improvement for our Data Warehouse loading process.

Our database platform is AIX 6.1. Both source and target databases are Oracle 11gR2. The ERP database is a single-instance database and DWH is running on a 2-node RAC.  Recently we moved our source database (ERP) from one datacenter to another separated by a WAN.
In the initial setup, the dump files were exported to a local JFS2 filesystem on the ERP database server and mounted on the DWH over NFS. Before the move to the new data center, the Data Pump export ran between 25 and 30 mins while the import ran between 2 hrs 30 mins and 3 hrs.
After the data center move the import times drastically increased to over 10 hrs! Obviously this blew our SLA out of the water. While we tried to determine the reason for the long import times, it became apparent that the import across the WAN was the culprit.

I had been playing with Direct NFS for a while and had previously considered using it for our archivelog backups to an NFS share. However, I didn't have any success in getting it to work. I posted several questions in the My Oracle Support Community about my issue, and the consensus I received was that it only worked for datafiles. After several configuration changes, including doubling the Data Pump parallelism, we were still nowhere near our original import load times. I decided to test the dNFS feature again to see how it could help.

Configuring dNFS involves a few steps:
1. The filesystems have to be already mounted over regular NFS using the correct mount options. On AIX this means having the mount points defined in /etc/filesystems.

2. Next I enabled the dNFS ODM library under the $ORACLE_HOME/lib directory:
cd $ORACLE_HOME/lib
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so
3. dNFS can use a file called oranfstab, which lists the NFS server, the network paths to it, and the mount points it serves to the database. The file can be located either in the $ORACLE_HOME/dbs directory for a single database or in /etc/oranfstab for system-wide use by multiple databases. Alternatively, the mount points can be read from the mounted filesystems (/etc/mtab on Linux or /etc/filesystems on AIX).
I chose to use /etc/filesystems because of its simplicity. Multiple paths are not used since we already use 10GbE LACP on the NAS share.
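Pulling the steps above together, here is a minimal sketch of the setup. The server name, IP, and export/mount paths in the sample oranfstab are hypothetical placeholders; since I read the mounts from /etc/filesystems instead, the oranfstab portion is optional in my configuration.

```shell
# Step 2: swap in the dNFS ODM library (run as the oracle software owner)
cd $ORACLE_HOME/lib
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so

# Step 3 (optional when mounts are read from /etc/filesystems):
# sample oranfstab -- "nas01", the IP, and the paths are hypothetical
cat > $ORACLE_HOME/dbs/oranfstab <<'EOF'
server: nas01
path: 192.168.1.10
export: /export/dpump mount: /u02/dpump
EOF
```

With oranfstab you can list multiple path entries per server for load balancing and failover; we skipped that since the NAS share already sits behind 10GbE LACP.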

Once this setup was done on both the source and DWH databases, they were restarted for the changes to take effect. After the restart you can check the alert log to verify that the ODM library has been loaded. There are also several views that show dNFS usage: query the v$dnfs_servers view to see the list of NFS servers and mount points (it is only populated the first time a file residing on NFS is accessed), and the v$dnfs_files view to list the files currently accessed over dNFS.
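For example, after the restart you can confirm dNFS is actually in play from SQL*Plus (a sketch that obviously requires a live instance; remember v$dnfs_servers stays empty until the first NFS file access):

```shell
# query the dNFS monitoring views as SYSDBA
sqlplus -s / as sysdba <<'EOF'
-- NFS servers and exported paths dNFS has connected to
SELECT svrname, dirname FROM v$dnfs_servers;
-- files currently open through dNFS
SELECT filename, filesize FROM v$dnfs_files;
EOF
```

If both views come back empty after a Data Pump job has touched the NFS mount, dNFS is not being used and the alert log is the first place to look.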

Using dNFS, our import times dropped to a consistent 1 hr 30 mins. Whoa! Didn't expect that. It is worth noting that the export time increased slightly due to writing to an NFS mount instead of a local filesystem, but this was trivial compared to the time savings on the import side.
Now we're looking at implementing dNFS for our archivelog backups on NFS.
If you're running Oracle 11g and using NFS for the database, I'd recommend switching to dNFS. It's awesome.
