My notes from setting up shared user accounts and home folders on a small cluster based on Debian 9.
I am setting up a small cluster for running weather simulations with WRF. The cluster consists of 8 compute nodes, a fileserver and a utility node. I want the users to have shared home folders and login information across all of the computers in the network. In Linux, this can be done by installing a directory database server on one computer, which holds the user accounts, and authentication clients on the rest of the computers.
I used the slapd server application from OpenLDAP. It is available in the repositories:
apt install slapd ldap-utils ldapscripts
I reconfigured the package after the installation to set the base domain and the admin password.
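On Debian the reconfiguration is done through debconf, which walks through the questions (base DN, organization name, admin password) interactively:

```shell
# Re-run the slapd configuration dialog; the answers determine
# the base DN (e.g. dc=example,dc=com) used everywhere below.
dpkg-reconfigure slapd
```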
I used phpldapadmin as a GUI to access the database. Unfortunately, it is not in the repository for Debian 9. However, I was able to download the package manually and install it into the system:
wget http://ftp.us.debian.org/debian/pool/main/p/phpldapadmin/phpldapadmin_1.2.2-6_all.deb
dpkg -i phpldapadmin_1.2.2-6_all.deb
apt-get -f install
apt install php-xml
systemctl restart apache2.service
Then it has to be configured. The configuration file is at
/etc/phpldapadmin/config.php and the following lines have to be edited according to the configuration of slapd:
$servers->setValue('server','name','Example LDAP');
$servers->setValue('server','base', array('dc=example,dc=com'));
$servers->setValue('login','bind_id','cn=admin,dc=example,dc=com');
Then go to http://ip/phpldapadmin and log in using the admin credentials.
In the GUI, I created two Organizational Units: groups and
users. Under groups, I created a Posix Group with gid 1000. Then I added a user under users with uid also 1000.
Adding a new user through phpldapadmin was quite painful, so for next time I should figure out a better way of adding new users. The GUI is quite OK for browsing the database, though.
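One less painful option (a sketch, assuming the example base DN from the config above, an ou named users and an existing group with gid 1000; the user jdoe is hypothetical) is to describe the account in an LDIF file and load it with ldapadd from the ldap-utils package:

```shell
# newuser.ldif describes a posixAccount entry; adjust the base DN
# and the uid/gid numbers to match the actual database.
cat > newuser.ldif <<'EOF'
dn: uid=jdoe,ou=users,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: jdoe
cn: John Doe
sn: Doe
uidNumber: 1001
gidNumber: 1000
homeDirectory: /home/jdoe
loginShell: /bin/bash
EOF

# Bind as the admin user and add the entry (-W prompts for the password).
ldapadd -x -D cn=admin,dc=example,dc=com -W -f newuser.ldif

# Set the user's password interactively.
ldappasswd -x -D cn=admin,dc=example,dc=com -W -S uid=jdoe,ou=users,dc=example,dc=com
```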
The configuration on the client side was simpler.
The only thing I did was to install the module for the authentication system:
apt install libpam-ldapd
The package manager asked a bunch of questions and configured everything.
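On my setup the package pulls in nslcd and, depending on the debconf answers, ends up with /etc/nsswitch.conf containing roughly these lines, which tell the name service to consult LDAP after the local files:

```
passwd:         files ldap
group:          files ldap
shadow:         files ldap
```

A quick sanity check on the client is `getent passwd`, which should now list the directory users alongside the local ones.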
I’m using ZFS on the fileserver. So first step was to create a new ZFS dataset and set the mountpoint to /home
zfs create -o mountpoint=/home storage/home
Then I had to install the necessary service and libraries:
apt install nfs-kernel-server
and share the dataset:
zfs set sharenfs=on storage/home
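The export can then be checked from any machine with nfs-common installed (fileserver1 is the hostname I use for the fileserver):

```shell
# Lists the paths exported by the server; /home should appear.
showmount -e fileserver1
```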
I had to install the utilities on the client:
apt-get install nfs-common
and mount the folder:
mount -t nfs fileserver1:/home /home
This was only a temporary test. To make the mounts permanent I used autofs.
apt install autofs
Then I configured autofs to automatically mount a home directory when the user accesses the folder. Autofs listens for file accesses and when it detects a request for a path in /home/username/… it tries to dynamically mount the folder. First, the master configuration has to be modified. The file is at /etc/auto.master:
/home /etc/auto.home --timeout 60
This means that autofs will watch the directory /home and use the configuration file
/etc/auto.home to mount the folders. The configuration file /etc/auto.home has to be created:
* -rw,soft,intr,rsize=8192,wsize=8192 fileserver1.wcc:/home/&
The star is a wildcard matching any directory name requested under /home, and the ampersand is replaced with that name, so the matching home directory is requested from the fileserver on demand.
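After changing the maps, autofs has to be restarted; a first access to a path under /home should then trigger the mount (jdoe is a placeholder username):

```shell
systemctl restart autofs.service
ls /home/jdoe        # first access triggers the NFS mount
mount | grep /home   # the dynamically mounted share should now be listed
```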
I have used OpenLDAP to share the login credentials across machines in my cluster so that I can log in as the same user (including the same uid and gid) on all of the machines. I have also configured sharing of users' home folders using NFS and autofs so that wherever I log in I have the same home folder.