
NFS sharedroot mini HowTo

A HowTo on building an NFS based diskless sharedroot cluster.

General Overview

When cluster nodes do not need high speed, low latency storage access, an NFS based diskless sharedroot cluster is the right choice. Such a cluster is easy to install and easy to scale out: new servers can be added on demand. The shared root filesystem turns the nodes into a single system image (SSI) cluster, so managing the whole cluster takes about the same effort as managing a single server.

The NFS based cluster nodes receive their initial network configuration from a DHCP server. The PXELINUX bootloader, the kernel and the sharedroot initrd are then downloaded from a TFTP server.

Two scenarios are described. In the first, the sharedroot resides on a cluster filesystem and is re-exported via NFS (Inline NFS-Server); in the second, the sharedroot simply lives on a regular NFS share (Extern NFS-Server). Wherever the two scenarios differ, the steps below are marked accordingly, so follow the path that matches your setup.

DHCP server configuration

/etc/dhcpd.conf:

    #
    # DHCP Server Configuration file.
    #   see /usr/share/doc/dhcp*/dhcpd.conf.sample
    #
    ddns-update-style ad-hoc;
    allow booting;
    allow bootp;
    option routers 192.168.234.1;
    option subnet-mask 255.255.255.0;
    option domain-name "pxeboot.atix";
    option domain-name-servers 192.168.3.3;
    default-lease-time 21600;
    max-lease-time 43200;
    subnet 192.168.234.0 netmask 255.255.255.0 {
        range 192.168.234.10 192.168.234.20;
        deny unknown-clients;
    }

    group {
        #tftp server
        next-server 192.168.3.120;
        #This is the pxe bootloader file
        filename "pxelinux.0";
        # One host block per client.
        host nfs-node1 {
                option host-name "nfs-node1";
                hardware ethernet 00:0c:29:60:d2:10;
                fixed-address 192.168.234.10;
        }
        host nfs-node2 {
                option host-name "nfs-node2";
                hardware ethernet 00:0c:29:36:79:e3;
                fixed-address 192.168.234.11;
        }
    }
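
After changing /etc/dhcpd.conf the DHCP service has to be enabled and (re)started. On a RHEL4/5 like system the standard init tools can be used for this:

      # chkconfig dhcpd on
      # service dhcpd restart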

TFTP Server configuration

Installation

On a RHEL4 like system do:

      # up2date -i tftp-server

On a RHEL5 like system do:

      # yum install tftp-server

/etc/xinetd.d/tftp:

      # default: off
      # description: The tftp server serves files using the trivial file transfer \
      #       protocol.  The tftp protocol is often used to boot diskless \
      #       workstations, download configuration files to network-aware printers, \
      #       and to start the installation process for some operating systems.
      service tftp
      {
        socket_type             = dgram
        protocol                = udp
        wait                    = yes
        user                    = root
        server                  = /usr/sbin/in.tftpd
        server_args             = -vv -s /var/tftpboot
        disable                 = no
        per_source              = 11
        cps                     = 100 2
        flags                   = IPv4
     }
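
Afterwards enable the tftp service and restart xinetd. Once pxelinux.0 has been copied to /var/tftpboot (see the next section), the setup can be verified from another host; the example below assumes the tftp-hpa command line client:

      # chkconfig tftp on
      # service xinetd restart
      # tftp 192.168.3.120 -c get pxelinux.0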

PXELINUX Installation

On most Linux distributions pxelinux.0 is included in the syslinux package. RHEL4 instead ships pxelinux.0 with the system-config-netboot rpm. The following examples are based on a RHEL4/5 like distribution.

  • installation (RHEL4):
        # up2date -i system-config-netboot
    
  • installation (RHEL5):
        # yum install syslinux
    
  • pxelinux.0 (on RHEL4 system-config-netboot places it under /tftpboot/linux-install/, on RHEL5 the syslinux package installs it as /usr/lib/syslinux/pxelinux.0):
        # cp /tftpboot/linux-install/pxelinux.0 /var/tftpboot/
    
  • Copy the appropriate vmlinuz and the sharedroot initrd (created with the comoonics tools) to /var/tftpboot/.
  • Create the config directory:

    # mkdir /var/tftpboot/pxelinux.cfg

  • /var/tftpboot/pxelinux.cfg/default:
        # This is the default pxelinux config file.
        timeout 100
        prompt 1
        default linux
    
        LABEL linux
        KERNEL vmlinuz-2.6.9-42.0.10.ELsmp
        APPEND initrd=initrd_sr-2.6.9-42.0.10.ELsmp.img
    

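PXELINUX looks for node specific configuration files (e.g. 01-<mac-address>, with the MAC address in lower case and separated by dashes) before falling back to default. If a single node needs different boot parameters, a per-node file or symlink can be created; the following sketch uses the MAC address of nfs-node1 from the DHCP configuration above:

      # cd /var/tftpboot/pxelinux.cfg
      # ln -s default 01-00-0c-29-60-d2-10
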
NFS Server

Possibilities

Inline NFS Server

The inline NFS server is part of the sharedroot cluster. This type of NFS server can be a single server with an ext3 based root filesystem or a GFS sharedroot cluster.

Extern NFS Server

The extern NFS server is standalone, i.e. not part of the sharedroot cluster. This type of NFS server can be a Linux NFS server or any other NAS server or appliance.

Inline NFS Server setup

  • /etc/cluster/cluster.conf:
          <!DOCTYPE cluster SYSTEM "/opt/atix/comoonics-cs/xml/rh-cluster.dtd">
          <cluster config_version="1" name="nfs_cluster">
            <cman expected_votes="1" two_node="0"/>
            <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
            <clusternodes>
                    <clusternode name="masternode" votes="1" nodeid="1">
                            <com_info>
                                    <syslog name="masternode"/>
                                    <rootvolume name="/dev/mapper/VolGroup00-LogVol00" fstype="ext3" mountopts="ro"/>
                                    <eth name="eth0" mac="00:0C:29:B1:B8:DA" ip="" mask="" gateway=""/>
                            </com_info>
                    </clusternode>
                    <clusternode name="nfs-node1" votes="1" nodeid="2">
                            <com_info>
                                    <syslog name="masternode"/>
                                    <rootvolume name="masternode:/" fstype="nfs" mountopts="nolock"/>
                                    <eth name="eth0" mac="00:0C:29:60:D2:10" ip="dhcp" mask="" gateway=""/>
                            </com_info>
                    </clusternode>
                    <clusternode name="nfs-node2" votes="1" nodeid="3">
                            <com_info>
                                    <syslog name="masternode"/>
                                    <rootvolume name="masternode:/" fstype="nfs" mountopts="nolock"/>
                                    <eth name="eth0" mac="00:0C:29:36:79:E3" ip="dhcp" mask="" gateway=""/>
                            </com_info>
                    </clusternode>
            </clusternodes>
          </cluster>
    

Extern NFS Server setup

  • /etc/cluster/cluster.conf:
          <!DOCTYPE cluster SYSTEM "/opt/atix/comoonics-cs/xml/rh-cluster.dtd">
          <cluster config_version="1" name="nfs_cluster">
            <cman expected_votes="1" two_node="1"/>
            <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
            <clusternodes>
                    <clusternode name="nfs-node1" votes="1" nodeid="2">
                            <com_info>
                                    <syslog name="masternode"/>
                                    <rootvolume name="nfsserver:/export/sharedroot" fstype="nfs"/>
                                    <eth name="eth0" mac="00:0C:29:60:D2:10" ip="dhcp" mask="" gateway=""/>
                            </com_info>
                    </clusternode>
                    <clusternode name="nfs-node2" votes="1" nodeid="3">
                            <com_info>
                                    <syslog name="masternode"/>
                                    <rootvolume name="nfsserver:/export/sharedroot" fstype="nfs"/>
                                    <eth name="eth0" mac="00:0C:29:36:79:E3" ip="dhcp" mask="" gateway=""/>
                            </com_info>
                    </clusternode>
            </clusternodes>
          </cluster>
    
  • Copy a preinstalled OS version onto the NFS share (only for the Extern NFS-Server; for the inline setup everything is already installed as described in the other HowTos). One way to do this is sketched below.
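
One possible way to do this is to export a directory on the external NFS server and to copy the OS tree of a preinstalled system into it. The following sketch assumes a Linux NFS server named nfsserver with the export /export/sharedroot; adapt names and addresses to your environment:

      ## /etc/exports on nfsserver
      /export/sharedroot      192.168.234.0/255.255.255.0(rw,no_root_squash,sync)

      ## on the preinstalled system: mount the export and copy the OS tree
      # mount nfsserver:/export/sharedroot /mnt/newroot
      # rsync -avx / /mnt/newroot/

If /boot or other directories are separate filesystems, they have to be copied in the same way.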

Independent Steps

  • Install the comoonics rpms. The following packages are required:
    • comoonics-bootimage
    • comoonics-cs-py
    • comoonics-ec-py
    • comoonics-bootimage-extras-nfs
    • comoonics-bootimage-extras-network
  • You can use up2date (RHEL4) or yum (RHEL5) to install the software:
          # up2date -i comoonics-bootimage comoonics-cs-py comoonics-ec-py comoonics-bootimage-extras-nfs comoonics-bootimage-extras-network
    
          # yum install comoonics-bootimage comoonics-cs-py comoonics-ec-py comoonics-bootimage-extras-nfs comoonics-bootimage-extras-network
    
  • Comoonics initrd:
          # mkinitrd -f  /boot/initrd_sr-$(uname -r).img $(uname -r)
    
  • Copy initrd to tftp server:

    # scp /boot/initrd_sr-$(uname -r).img tftp-server:/var/tftpboot/

  • /boot/grub/grub.conf (only for Inline NFS-Server):
          default=0
          timeout=5
          splashimage=(hd0,0)/grub/splash.xpm.gz
          hiddenmenu
          title Red Hat Enterprise Linux ES Comoonics Master (2.6.9-42.EL)
              root (hd0,0)
              kernel /vmlinuz-2.6.9-42.0.10.ELsmp ro root=/dev/VolGroup00/LogVol00
              initrd /initrd_sr-2.6.9-42.0.10.ELsmp.img
    
  • Create the CDSL tree (for the Inline NFS-Server this is done as described in the other HowTos, for the Extern NFS-Server this has to be done on the NFS export). In both cases the filesystem is mounted on /mnt/newroot:
          [root@localhost ~]#  com-mkcdslinfrastructure -r /mnt/newroot -i
    
  • Mount CDSL:
          [root@localhost ~]# mount --bind /mnt/newroot/cluster/cdsl/1/ /mnt/newroot/cdsl.local/
    
  • Make /var hostdependent:
          [root@localhost ~]# com-mkcdsl -r /mnt/newroot -a /var
    
  • Make /var/lib shared:
           [root@localhost ~]# com-mkcdsl -r /mnt/newroot -s /var/lib
    
  • /etc/mtab:
          [root@localhost ~]# rm -f /mnt/newroot/etc/mtab
          [root@localhost ~]# ln -s /proc/mounts /mnt/newroot/etc/mtab
    
  • Make /etc/sysconfig/network hostdependent (and edit all copies residing in /mnt/newroot/cluster/cdsl/?/etc/sysconfig/network; an example is shown after this list):
          [root@localhost ~]# com-mkcdsl -r /mnt/newroot -a /etc/sysconfig/network
    
  • Configure the export in /etc/exports (only for the Inline NFS-Server):
          /       192.168.0.0/255.255.0.0(rw,no_root_squash,sync)
    
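If the inline NFS server is already up, the export can be activated without a reboot using the standard RHEL NFS tools:

      # chkconfig nfs on
      # service nfs start
      # exportfs -ra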

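As mentioned above, each node's hostdependent copy of /etc/sysconfig/network usually only needs the node's own hostname. An example for nfs-node1 (nodeid 2 in cluster.conf), assuming the standard RHEL keywords:

      # cat /mnt/newroot/cluster/cdsl/2/etc/sysconfig/network
      NETWORKING=yes
      HOSTNAME=nfs-node1
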
Reboot

Now all servers can be rebooted into a sharedroot cluster. :-)
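
To check that a node really runs from the NFS based sharedroot, look at the root entry in /proc/mounts; its filesystem type should be nfs:

      # grep " / nfs " /proc/mounts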

