DB2 Version 9.7 for Linux, UNIX, and Windows

Format of the DB2 node configuration file

The db2nodes.cfg file is used to define the database partition servers that participate in a DB2® instance. The db2nodes.cfg file is also used to specify the IP address or host name of a high-speed interconnect, if you want to use a high-speed interconnect for database partition server communication.

The format of the db2nodes.cfg file on Linux® and UNIX® operating systems is as follows:

dbpartitionnum hostname logicalport netname resourcesetname

dbpartitionnum, hostname, logicalport, netname, and resourcesetname are defined in the following section.

The format of the db2nodes.cfg file on Windows® operating systems is as follows:

dbpartitionnum hostname computername logicalport netname resourcesetname

On Windows operating systems, these entries are added to the db2nodes.cfg file by the db2ncrt or START DBM ADD DBPARTITIONNUM commands, and can be modified by the db2nchg command. Do not add these lines to the file or edit the file directly.
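
For example, a db2ncrt command like the following adds a second logical partition server on ServerB to an instance owned by ServerA; the node number, port, account, and host names shown are illustrative:

    db2ncrt /n:2 /u:mydomain\db2admin,password /i:db2inst1 /m:ServerB /p:1 /o:ServerA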

dbpartitionnum
A unique number, between 0 and 999, that identifies a database partition server in a partitioned database system.

To scale your partitioned database system, you add an entry for each database partition server to the db2nodes.cfg file. The dbpartitionnum values that you select for additional database partition servers must be in ascending order; however, gaps can exist in the sequence. You can choose to leave a gap between dbpartitionnum values if you plan to add logical partition servers later and want to keep the nodes logically grouped in this file.

This entry is required.

hostname
The TCP/IP host name of the database partition server for use by the FCM. This entry is required.

If host names are supplied in the db2nodes.cfg file instead of IP addresses, the database manager dynamically tries to resolve the host names. Resolution can be either local or through lookup at registered Domain Name Servers (DNS), as determined by the operating system settings on the machine.

Starting with DB2 Version 9.1, both TCP/IPv4 and TCP/IPv6 protocols are supported, and the method used to resolve host names has changed.

While the method used in pre-Version 9.1 releases resolves the string exactly as defined in the db2nodes.cfg file, the method in Version 9.1 or later tries to resolve the Fully Qualified Domain Name (FQDN) when short names are defined in the db2nodes.cfg file. If short names are specified but the machines are configured to use fully qualified host names, this can lead to unnecessary delays in processes that resolve host names.

To avoid delays in DB2 commands that require host name resolution, use any of the following workarounds:

  1. If short names are specified in the db2nodes.cfg file and in the operating system host name file, specify both the short name and the fully qualified domain name for the host in the operating system hosts file, as shown in the sketch after this list.
  2. To use only IPv4 addresses when you know that the DB2 server listens on an IPv4 port, issue the following command:
    db2 catalog tcpip4 node db2tcp2 remote 192.0.32.67 server db2inst1 with "Look up IPv4 address from 192.0.32.67"
  3. To use only IPv6 addresses when you know that the DB2 server listens on an IPv6 port, issue the following command:
    db2 catalog tcpip6 node db2tcp3 remote 1080:0:0:0:8:800:200C:417A server 50000 with "Look up IPv6 address from 1080:0:0:0:8:800:200C:417A"
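
For the first workaround, the operating system hosts file (for example, /etc/hosts on Linux and UNIX) maps both the fully qualified domain name and the short name to the same address. The address and names below are illustrative:

    192.0.32.67   ServerA.example.com   ServerA
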
logicalport
Specifies the logical port number for the database partition server. This field is used to specify a particular database partition server on a workstation that is running logical database partition servers.

DB2 reserves a port range (for example, 60000 - 60003) in the /etc/services file for inter-partition communications at installation time. The logicalport field in db2nodes.cfg specifies which port in that range you want to assign to a particular logical partition server.
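
For example, for an instance named db2inst1 (an illustrative name), the reserved range in /etc/services typically looks like the following; logicalport 0 corresponds to the first port in the range, logicalport 1 to the second, and so on:

    DB2_db2inst1      60000/tcp
    DB2_db2inst1_1    60001/tcp
    DB2_db2inst1_2    60002/tcp
    DB2_db2inst1_END  60003/tcp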

If there is no entry for this field, the default is 0. However, if you add an entry for the netname field, you must enter a number for the logicalport field.

If you are using logical database partitions, the logicalport value you specify must start at 0 and continue in ascending order (for example, 0, 1, 2).

Furthermore, if you specify a logicalport entry for one database partition server, you must specify a logicalport for each database partition server listed in your db2nodes.cfg file.

This field is optional only if you are not using logical database partitions or a high speed interconnect.

netname
Specifies the host name or the IP address of the high speed interconnect for FCM communication.

If an entry is specified for this field, all communication between database partition servers (except for communications as a result of the db2start, db2stop, and db2_all commands) is handled through the high speed interconnect.

This parameter is required only if you are using a high speed interconnect for database partition communications.

resourcesetname
The resourcesetname defines the operating system resource set in which the node should be started. The resourcesetname is for process affinity support, used for Multiple Logical Nodes (MLNs). This support is provided with a string type field formerly known as quadname.

This parameter is only supported on AIX®, HP-UX, and the Solaris Operating System.

On AIX, this concept is known as "resource sets" and on the Solaris Operating System it is called "projects". Refer to your operating system documentation for more information on resource management.

On HP-UX, the resourcesetname parameter is the name of a PRM group. Refer to the "HP-UX Process Resource Manager User Guide" (B8733-90007) from HP for more information.

On Windows operating systems, process affinity for a logical node can be defined through the DB2PROCESSORS registry variable.
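
For example, assuming an instance named db2inst1, a db2set command along the following lines assigns processors 0 and 1 to logical node 1; the processor list and node number are illustrative:

    db2set DB2PROCESSORS=0,1 -i db2inst1 1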

On Linux operating systems, the resourcesetname column defines a number that corresponds to a Non-Uniform Memory Access (NUMA) node on the system. The numactl system utility must be available, as well as a 2.6 kernel with NUMA policy support.

The netname parameter must be specified if the resourcesetname parameter is used.

Example configurations

Use the following example configurations to determine the appropriate configuration for your environment.

One computer, four database partition servers
If you are not using a clustered environment and want to have four database partition servers on one physical workstation called ServerA, update the db2nodes.cfg file as follows:
   0          ServerA        0
   1          ServerA        1
   2          ServerA        2
   3          ServerA        3
Two computers, one database partition server per computer
If you want your partitioned database system to contain two physical workstations, called ServerA and ServerB, update the db2nodes.cfg file as follows:
   0          ServerA        0
   1          ServerB        0
Two computers, three database partition servers on one computer
If you want your partitioned database system to contain two physical workstations, called ServerA and ServerB, with ServerA running three database partition servers, update the db2nodes.cfg file as follows:
   4          ServerA        0
   6          ServerA        1
   8          ServerA        2
   9          ServerB        0
Two computers, three database partition servers with high speed switches
If you want your partitioned database system to contain two computers, called ServerA and ServerB (with ServerB running two database partition servers), and to use high speed interconnects called switch1 and switch2, update the db2nodes.cfg file as follows:
   0          ServerA        0              switch1
   1          ServerB        0              switch2
   2          ServerB        1              switch2

Examples using resourcesetname

The following examples show how to use resourcesetname on each supported operating system.

AIX example

Here is an example of how to set up the resource set for AIX operating systems.

In this example, there is one physical node with 32 processors and 8 logical database partitions (MLNs). This example shows how to provide process affinity to each MLN.

  1. Define resource sets in /etc/rset:
    DB2/MLN1:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00000,sys/cpu.00001,sys/cpu.00002,sys/cpu.00003
    
    DB2/MLN2:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00004,sys/cpu.00005,sys/cpu.00006,sys/cpu.00007
    
    DB2/MLN3:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00008,sys/cpu.00009,sys/cpu.00010,sys/cpu.00011
    
    DB2/MLN4:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00012,sys/cpu.00013,sys/cpu.00014,sys/cpu.00015
    
    DB2/MLN5:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00016,sys/cpu.00017,sys/cpu.00018,sys/cpu.00019
    
    DB2/MLN6:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00020,sys/cpu.00021,sys/cpu.00022,sys/cpu.00023
    
    DB2/MLN7:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00024,sys/cpu.00025,sys/cpu.00026,sys/cpu.00027
    
    DB2/MLN8:
        owner     = db2inst1
        group     = system
        perm      = rwr-r-
        resources = sys/cpu.00028,sys/cpu.00029,sys/cpu.00030,sys/cpu.00031
  2. Enable memory affinity by typing the following command:
       vmo -p -o memory_affinity=1
  3. Give instance permissions to use resource sets:
    chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE,CAP_NUMA_ATTACH db2inst1
  4. Add the resource set name as the fifth column in db2nodes.cfg:
    1 regatta 0 regatta DB2/MLN1
    2 regatta 1 regatta DB2/MLN2
    3 regatta 2 regatta DB2/MLN3
    4 regatta 3 regatta DB2/MLN4
    5 regatta 4 regatta DB2/MLN5
    6 regatta 5 regatta DB2/MLN6
    7 regatta 6 regatta DB2/MLN7
    8 regatta 7 regatta DB2/MLN8
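
As a minimal check, assuming the definitions above have been registered, you can list the resource sets with the lsrset command:

    lsrset -av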

HP-UX example

This example shows how to use PRM groups for CPU shares on a machine with 4 CPUs and 4 MLNs and 24% of CPU share per MLN, leaving 4% for other applications. The DB2 instance name is db2inst1.

  1. Edit the GROUP section of /etc/prmconf:
       OTHERS:1:4::
       db2prm1:50:24::
       db2prm2:51:24::
       db2prm3:52:24::
       db2prm4:53:24::
  2. Add instance owner entry to /etc/prmconf:
       db2inst1::::OTHERS,db2prm1,db2prm2,db2prm3,db2prm4
  3. Initialize groups and enable the CPU manager by entering the following commands:
       prmconfig -i
       prmconfig -e CPU
  4. Add PRM group names as the fifth column to db2nodes.cfg:
       1 voyager 0 voyager db2prm1
       2 voyager 1 voyager db2prm2
       3 voyager 2 voyager db2prm3
       4 voyager 3 voyager db2prm4

The PRM configuration (steps 1-3) can also be done by using the interactive GUI tool xprm.
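
To verify the resulting configuration, you can display the active PRM group settings, for example with the prmlist command:

    prmlist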

Linux example

On Linux operating systems, the resourcesetname column defines a number that corresponds to a Non-Uniform Memory Access (NUMA) node on the system. The numactl system utility must be available, in addition to a 2.6 kernel with NUMA policy support. Refer to the man page for numactl for more information about NUMA support on Linux operating systems.

This example shows how to set up a four node NUMA computer with each logical node associated with a NUMA node.

  1. Ensure that NUMA capabilities exist on your system.
  2. Issue the following command:
    $ numactl --hardware
    Output similar to the following displays:
    available: 4 nodes (0-3)
    node 0 size: 1901 MB 
    node 0 free: 1457 MB 
    node 1 size: 1910 MB 
    node 1 free: 1841 MB 
    node 2 size: 1910 MB 
    node 2 free: 1851 MB 
    node 3 size: 1905 MB 
    node 3 free: 1796 MB
  3. In this example, there are four NUMA nodes on the system. Edit the db2nodes.cfg file as follows to associate each MLN with a NUMA node on the system:
    0 hostname 0 hostname 0 
    1 hostname 1 hostname 1 
    2 hostname 2 hostname 2 
    3 hostname 3 hostname 3
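
Conceptually, this configuration is equivalent to starting each logical node with matching numactl CPU and memory bindings; the following illustrative command shows the binding that corresponds to MLN 0 (myapp is a hypothetical placeholder):

    numactl --cpunodebind=0 --membind=0 ./myapp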

Solaris example

Here is an example of how to set up the project for Solaris Version 9.

In this example, there is one physical node with eight processors: one CPU will be used for the default project, three CPUs will be used by the application server, and four CPUs by DB2. The instance name is db2inst1.

  1. Create a resource pool configuration file using an editor. For this example, the file is called pool.db2 and has the following content:
       create system hostname
       create pset pset_default (uint pset.min = 1)
       create pset db0_pset (uint pset.min = 1; uint pset.max = 1)
       create pset db1_pset (uint pset.min = 1; uint pset.max = 1)
       create pset db2_pset (uint pset.min = 1; uint pset.max = 1)
       create pset db3_pset (uint pset.min = 1; uint pset.max = 1)
       create pset appsrv_pset (uint pset.min = 3; uint pset.max = 3)
       create pool pool_default (string pool.scheduler="TS";  
            boolean pool.default = true)
       create pool db0_pool (string pool.scheduler="TS") 
       create pool db1_pool (string pool.scheduler="TS") 
       create pool db2_pool (string pool.scheduler="TS") 
       create pool db3_pool (string pool.scheduler="TS") 
       create pool appsrv_pool (string pool.scheduler="TS") 
       associate pool pool_default (pset pset_default) 
       associate pool db0_pool (pset db0_pset) 
       associate pool db1_pool (pset db1_pset) 
       associate pool db2_pool (pset db2_pset) 
       associate pool db3_pool (pset db3_pset) 
       associate pool appsrv_pool (pset appsrv_pset)
  2. Edit the /etc/project file to add the DB2 projects and appsrv project as follows:
       system:0:::: 
       user.root:1:::: 
       noproject:2:::: 
       default:3:::: 
       group.staff:10:::: 
       appsrv:4000:App Serv project:root::project.pool=appsrv_pool 
       db2proj0:5000:DB2 Node 0 project:db2inst1,root::project.pool=db0_pool 
       db2proj1:5001:DB2 Node 1 project:db2inst1,root::project.pool=db1_pool 
       db2proj2:5002:DB2 Node 2 project:db2inst1,root::project.pool=db2_pool 
       db2proj3:5003:DB2 Node 3 project:db2inst1,root::project.pool=db3_pool 
  3. Create the resource pool: # poolcfg -f pool.db2
  4. Activate the resource pool: # pooladm -c
  5. Add the project name as the fifth column in the db2nodes.cfg file:
       0 hostname 0 hostname db2proj0
       1 hostname 1 hostname db2proj1
       2 hostname 2 hostname db2proj2
       3 hostname 3 hostname db2proj3
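
After restarting the instance, you can verify that the DB2 engine processes run in the expected projects; for example (db2sysc is the DB2 engine process name, and ps output options can vary by Solaris release):

    ps -e -o project,comm | grep db2sysc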