Edinburgh DPM Setup

This page describes the DPM setup at Edinburgh. Comments and questions should be sent to Greig Cowan.

SRM Endpoint

srm://dpm.epcc.ed.ac.uk:8443/dpm/epcc.ed.ac.uk/home/
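A simple way to exercise the endpoint is to copy a file in and read it back with an SRM client. The following is a minimal sketch, assuming the srmcp client is installed and a valid grid proxy for one of the supported VOs (dteam here) is available; the VO directory and filename are illustrative only.

  # Write a local file through the SRM interface, then read it back.
  srmcp file:////tmp/testfile \
      srm://dpm.epcc.ed.ac.uk:8443/dpm/epcc.ed.ac.uk/home/dteam/testfile
  srmcp srm://dpm.epcc.ed.ac.uk:8443/dpm/epcc.ed.ac.uk/home/dteam/testfile \
      file:////tmp/testfile.copy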

Supported VOs

alice, atlas, biomed, cms, dteam, lhcb, sixt

Admin Node

Our DPM node (dpm.epcc.ed.ac.uk) acts as both the admin and pool node: all services, including the DPNS and the MySQL database, run on it. This is not an ideal situation since it will not scale, but we are limited by the available hardware. The node itself has two 1 GHz Intel Xeon CPUs with 2 GB of RAM and runs the complete LCG software stack on top of SL 3.0.5.
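A quick way to confirm that everything is up on the combined node is to check the daemon init scripts. This is a sketch only; the exact service names depend on the DPM release installed.

  # Check the DPM-related daemons on the admin/pool node
  # (service names as in the LCG 2.x DPM release; they may vary).
  service dpm status
  service dpnsdaemon status
  service srmv1 status
  service srmv2 status
  service rfiod status
  service mysqld status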

Disk Server

Since the admin node does not have much local disk space, approximately 3 TB of space is NFS mounted onto the node: 1 TB comes from dcache.epcc.ed.ac.uk and the other 2 TB is mounted from the University SAN. As mentioned on the dCache setup page, using NFS is not ideal since it limits the achievable data transfer rate, and the occurrence of stale NFS file handles has left the DPM instance not knowing which filesystems were available for it to use.
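The NFS-mounted areas are attached to DPM in the usual way, as filesystems belonging to a pool. Below is a minimal sketch of the relevant admin commands, assuming the standard DPM admin tools on the admin node; the pool name and mount points are illustrative, not the actual Edinburgh values.

  # Create a pool and attach the NFS-mounted areas to it as filesystems
  # (run on the admin node; pool name and paths are illustrative).
  dpm-addpool --poolname Permanent
  dpm-addfs --poolname Permanent --server dpm.epcc.ed.ac.uk --fs /storage/dcache
  dpm-addfs --poolname Permanent --server dpm.epcc.ed.ac.uk --fs /storage/san

  # Verify the pool and filesystem configuration.
  dpm-qryconf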

The Edinburgh Classic SE has been migrated to a DPM pool node and incorporated into the DPM instance as a separate pool. The migrated filesystem was defined as RDONLY to prevent any further data being written to the Classic SE node while still allowing access to the files stored on it (via both the SRM interface and GridFTP).
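Marking the migrated filesystem read-only can be done with dpm-modifyfs. A sketch, assuming an illustrative hostname and mount point for the old Classic SE:

  # Set the migrated Classic SE filesystem to read-only so no new data is
  # written to it, while existing files remain readable.
  dpm-modifyfs --server classic-se.epcc.ed.ac.uk --fs /storage --st RDONLY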

Optimisation

See the Optimisation section of the Edinburgh dCache wiki page.

Additional Information

As the University SAN runs Solaris, we found that we had to set up the NFS mount using TCP instead of UDP. When using UDP, any attempt to write to the NFS-mounted partition (e.g. touch filename) caused the writing process to hang in uninterruptible sleep (state D in `ps aux`) and the mount to lock up.
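For reference, a minimal sketch of the corresponding mount configuration, forcing the NFS client to use TCP; the SAN hostname, export path, and mount point are illustrative.

  # /etc/fstab entry using TCP rather than UDP for the NFS mount
  # (hostname, export, and mount point are illustrative).
  san.example.ac.uk:/export/gridpp  /storage/san  nfs  tcp,hard,intr,rsize=32768,wsize=32768  0 0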