How to Configure NFS Server Clustering with Pacemaker on CentOS 7 / RHEL 7

18 Responses

  1. Tomas says:

    Amazing article! I haven’t had a chance to try it yet, but does it work with NFSv4?

    • Yes, Tomas, it will work with NFSv4.
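
      On the client side you can request NFSv4 explicitly at mount time; for example (the VIP address and export path below are illustrative):

      # mount -t nfs -o nfsvers=4 192.168.1.51:/nfsshare /mnt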

      • Tomas says:

        Thanks to this article, I’ve managed to get the whole NFSv4 Pacemaker cluster deployed via Puppet.

        One small thing, you did group the resources together, but you didn’t set any order. I had a problem where the nfsroot resource failed because the nfsshare was not yet available. I’ve configured ordering constraints to resolve it.
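
        In case it helps others, the ordering constraint was along these lines (using the nfsshare and nfsroot resource names from the article):

        # pcs constraint order start nfsshare then start nfsroot
        # pcs constraint list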

  2. Antonius says:

    Hi,

    Does this type of fence device also work in other scenarios, such as an application cluster?

  3. Tomas says:

    I just noticed that you use a shared disk on VirtualBox.

    What happens when you actually try to fence a node manually? For example:

    # pcs stonith fence nfs2

    I cannot see anything here showing that you tested it and that it worked. I don’t use VirtualBox, so I’m genuinely curious.
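
    I would expect a successful test to power off the fenced node and then show the resources running on the surviving node, something like:

    # pcs stonith fence nfs2
    # pcs status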

  4. Daniel Cordero says:

    Hello.

    How can I extend this procedure to also serve CIFS shares?

    Greetings.

  5. Jérôme says:

    Great article! Exactly what I need.
    Just a question regarding resource configuration: how much memory and CPU would you dedicate to each cluster node, for roughly 50 users using a shared disk of about 500 GB?

  6. NILESH ALHAT says:

    I’m not able to mount the share. Any suggestions?

  7. Paul.LKW says:

    I just added a disk as /dev/sdb, but when I issue the command ls -l /dev/disk/by-id I could not see which entry (like your wwn-0x6001405e49919dad5824dc2af5fb3ca0) relates to sdb, so I could not follow your instructions any further. Any hints you could provide?
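
    Is there a way to map the device from udev directly? For example, something like:

    # udevadm info --query=symlink --name=/dev/sdb
    # ls -l /dev/disk/by-path/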

  8. mous says:

    I have a question: if I have one LUN serving HA NFS across a 3-node cluster (x, y and z), and the LUN is currently mounted on server x, can we add an NFS mount point on servers y and z using the VIP?
    So the LUN would be direct-mounted on x and NFS-mounted on y and z.
    If yes, then if server x goes down, will the LUN be mounted on y or z even though the NFS mount exists?

    • Pradeep Kumar says:

      With Pacemaker we usually configure an active-passive NFS cluster: all the services, including the VIP and the NFS LUN, are available on the active node, say node x. If that node goes down for some reason, all services (including the NFS LUN and the VIP) are migrated to either node y or node z.
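
      Clients should always mount through the VIP rather than a node name, so the mount follows whichever node is active; for example (the VIP address is illustrative):

      # mount -t nfs 192.168.1.51:/nfsshare /mnt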

  9. Paul.LKW says:

    I also ran into a problem with this one: “pcs resource create nfsshare Filesystem device=/dev/sdb1 directory=/nfsshare fstype=xfs --group nfsgrp”.
    Modern Linux can interchange device names, so at some point /dev/sdb1 and /dev/sdc1 got swapped on reboot, and I could not find a way to make pcs resource create accept a UUID. So once the NFS node goes down, whether it can resume is an unknown factor with this method!
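
    One workaround that should avoid the renaming problem is to point device= at a persistent symlink instead of /dev/sdb1, since the Filesystem agent appears to accept any path that mount accepts; for example (the UUID is a placeholder):

    # blkid /dev/sdb1
    # pcs resource update nfsshare device=/dev/disk/by-uuid/<UUID-from-blkid>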

  10. kaushik sen says:

    Perfect article for FC SAN storage and a two-node physical NFS cluster.
    I’ve completed one of my projects with the help of this article.

    Thank you 🙂

  11. kaushik sen says:

    Could you please share any documentation for using a Dell iDRAC for fencing?
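
    Since iDRAC speaks IPMI over LAN, would something along these lines work? (The IP address and credentials are placeholders.)

    # yum install fence-agents-ipmilan
    # pcs stonith describe fence_ipmilan
    # pcs stonith create idrac_nfs1 fence_ipmilan ipaddr=10.0.0.11 login=root passwd=calvin lanplus=1 pcmk_host_list=nfs1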

  12. Omar says:

    Hi Pradeep,
    I got this error after applying step 4 (define a fencing device for each cluster node):
    Error: Error: Agent 'disk_fencing' is not installed or does not provide valid metadata: Agent disk_fencing not found or does not support meta-data: Invalid argument (22)
    Metadata query for stonith:disk_fencing failed: Input/output error
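
    Do I perhaps need to install the fence agents first, so that the agent named in pcs stonith create actually exists on the nodes? For example:

    # yum install fence-agents-all
    # pcs stonith list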

  13. Phuong Nguyen says:

    Hi,
    I tried it, and everything went smoothly except the last step, mounting the NFS share on the client.
    I could only mount it on nfs1.example.com, not on the other node.
    “ip a” shows that the virtual floating IP is assigned to the nfs1 node, not the nfs2 node, and it cannot be pinged from nfs2.
    Any suggestions?
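
    Should I test the failover manually, for example by putting nfs1 in standby and checking whether the VIP moves to nfs2 (pcs 0.9 syntax on CentOS 7)?

    # pcs cluster standby nfs1
    # pcs status
    # pcs cluster unstandby nfs1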
