Edinburgh dCache Setup

This page gives a brief overview of the dCache setup at Edinburgh. Comments and questions should be sent to Greig Cowan. The latest version of dCache (currently 1.6.6-5) is installed on the nodes.

SRM Endpoint

srm://srm.epcc.ed.ac.uk:8443/pnfs/epcc.ed.ac.uk/data/
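
As a hedged example of exercising this endpoint (srmcp from the dCache SRM client tools is assumed to be installed, and the dteam directory and file names are purely illustrative), a copy of a local file into the dCache might look like:

 # Create a grid proxy, then copy a local file into the dteam area (illustrative paths)
 grid-proxy-init
 srmcp file:////home/user/test.dat \
     srm://srm.epcc.ed.ac.uk:8443/pnfs/epcc.ed.ac.uk/data/dteam/test.dat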

Supported VOs

atlas, alice, cms, lhcb, biomed, dteam, sixt, pheno

Admin Node

We have a single dCache admin node (srm.epcc.ed.ac.uk). This has two 2.8GHz Intel Xeon CPUs and 2GB of RAM. On top of SL3.0.5 and the LCG software stack, srm runs the dcache-core services, as well as PNFS and the PostgreSQL instance that holds the databases for each of the supported VOs. We no longer run the gdbm backend to PNFS (not since ~Nov 2005). We have the SRM door open on the head node (see the endpoint above) as well as a GridFTP door (note: this door still needs to be added to the diagram below).
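
For day-to-day inspection of these services (the SRM and GridFTP doors, PnfsManager and so on) the dCache admin cell interface on the head node is used. A hedged example of connecting to it is shown below, assuming the usual dCache 1.x defaults of port 22223 and the blowfish cipher.

 # Connect to the dCache admin interface on the head node (dCache 1.x defaults assumed)
 ssh -c blowfish -p 22223 -l admin srm.epcc.ed.ac.uk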

Disk Server

We have a single disk server (dcache.epcc.ed.ac.uk). This has eight 1.9GHz Intel Xeon CPUs and 16GB of RAM, and runs Red Hat Advanced Server 2.1. We use fibre channel to attach it directly to an IBM Dual FAStT900 22TB RAID (level 5) disk array. In addition, we use NFS to mount 10TB of storage from the University SAN onto dcache. Using NFS is not an ideal solution, since the observed write rates are slow; this has been identified as a concern during the GridPP service challenges, when large data volumes have been transferred to Edinburgh. Using NFS also leads to problems with "Stale NFS file handle" errors cropping up. We may decide to move away from NFS and instead attach dcache directly to the SAN. If we could run the SAN as a dCache pool node, this would be even better, since it would allow us to run a second pool with another GridFTP door, leading to better load balancing. dcache runs the dcache-pool services and has GridFTP and GSI-dcap doors available for use.
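
As a hedged example of using one of these doors, a direct read through the GSI-dcap door might look like the following; 22128 is assumed to be the gsidcap port (the usual dCache default) and the file path is purely illustrative.

 # Read a file back through the GSI-dcap door on the pool node (illustrative path)
 grid-proxy-init
 dccp gsidcap://dcache.epcc.ed.ac.uk:22128/pnfs/epcc.ed.ac.uk/data/dteam/test.dat /tmp/test.dat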

The problem with slow writes to the NFS-mounted pools has been addressed by setting these pools up as read-only; the RAID pools are the write pools (a sketch of how such a read/write split can be expressed in the PoolManager configuration is given below). The ideal setup would allow us to slowly flush the write pools into the read pools during quiet periods for the dCache. Unfortunately, the version of dCache deployed (1.6.6-5) does not allow this; however, 1.6.7/8 should have the functionality. Another possibility would be to set up the SAN as an HSM (i.e. not running the available disk as pools). Have a look at the following links for more information:
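
As a rough illustration of the read/write split mentioned above, the PoolManager.conf fragment below steers writes to a RAID pool group and reads to an NFS pool group via link preferences. The pool and group names are invented, the world-net unit group is assumed to already exist, and the exact psu syntax should be checked against the PoolManager documentation for the deployed release.

 # PoolManager.conf sketch (names invented): writes go to the RAID pools, reads to the NFS pools
 psu create pgroup raid-write
 psu addto pgroup raid-write dcache_raid_1
 psu create pgroup nfs-read
 psu addto pgroup nfs-read dcache_nfs_1
 psu create link write-link world-net
 psu set link write-link -readpref=0 -writepref=10 -cachepref=0
 psu add link write-link raid-write
 psu create link read-link world-net
 psu set link read-link -readpref=10 -writepref=0 -cachepref=10
 psu add link read-link nfs-read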

We may consider moving to RAID level 6 at some point in the future (this was reported as now being a viable option at the HEPiX Rome 2006 conference). This will require buying new RAID controllers.

Optimisation

Work has been carried out on the optimisation of read and write performance when using dCache as the SRM frontend and disk pool manager to storage resources. Hardware at Glasgow was used during the tests.

The SC3 kernel tuning parameters have been applied to both nodes in order to improve performance. When running FTS transfers (i.e. during GridPP service challenges) only a single GridFTP parallel stream is used, but with multiple concurrent files.
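
For completeness, the tuning essentially amounts to raising the TCP buffer limits in /etc/sysctl.conf and applying them with sysctl -p; the values below are only indicative of the sort of settings recommended for SC3, not a copy of our exact configuration.

 # /etc/sysctl.conf additions (indicative values only; apply with "sysctl -p")
 net.core.rmem_max = 4194304
 net.core.wmem_max = 4194304
 net.ipv4.tcp_rmem = 4096 87380 4194304
 net.ipv4.tcp_wmem = 4096 65536 4194304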

Additional Information

Since the machine runs RHAS2.1, there are unresolved dependency issues with running the complete LCG software stack. In order to get the machine operating as a fully functioning dCache pool/door node, it is necessary to install the dCache software (no PNFS/PostgreSQL required) along with the components of the LCG security stack that allow grid data transfers to be authenticated. It is also necessary to set up a cron job to pull the dcache.kpwd file across from the head node. Further information about this can be found in the dCache FAQ.
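
A hedged sketch of such a cron job is shown below; it assumes passwordless ssh from the pool node to the head node and the standard /opt/d-cache/etc location for dcache.kpwd, both of which may differ locally.

 # /etc/cron.d/dcache-kpwd -- refresh the kpwd file from the head node every 30 minutes
 */30 * * * * root scp -q srm.epcc.ed.ac.uk:/opt/d-cache/etc/dcache.kpwd /opt/d-cache/etc/dcache.kpwd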

As the University SAN runs Solaris, we found that we had to set up the NFS mount using TCP instead of UDP. When using UDP, any attempt to write to the NFS-mounted partition (e.g. touch filename) caused the mount to lock up, with the writing process stuck in state D (uninterruptible sleep) according to `ps aux`.
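
The relevant change is simply the NFS transport option. A sketch of the corresponding /etc/fstab line is shown below; the SAN hostname, export path, mount point and read/write sizes are invented for illustration.

 # /etc/fstab -- force the NFS mount over TCP (hostname, export and mount point are illustrative)
 san.ed.ac.uk:/export/gridpp  /pool/nfs1  nfs  tcp,hard,intr,rsize=32768,wsize=32768  0 0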