Lancaster dCache Setup


SRM Endpoint

srm://fal-pygrid-20.lancs.ac.uk:8443/pnfs/lancs.ac.uk/data/<vo name>
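
A full SURL for a given VO is formed by substituting the VO name into the path above. A minimal sketch (the `atlas` VO and the `some/file` suffix are just illustrative):

```shell
# Build a SURL on the Lancaster endpoint for a given VO.
ENDPOINT="srm://fal-pygrid-20.lancs.ac.uk:8443"
BASE="/pnfs/lancs.ac.uk/data"
vo="atlas"                                  # any supported VO name

surl="${ENDPOINT}${BASE}/${vo}/some/file"   # 'some/file' is illustrative
echo "$surl"
```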

Supported VOs: alice, atlas, babar, biomed, cms, dteam, dzero, lhcb, sixt

Admin node(s)

Our admin node is a single box with dual Xeon processors and 2 GB of RAM, running SL3. It runs the "dcache-core" services, the pnfs databases and a GridFTP door. It has a single network interface and is plugged into the same physical switch as the pool nodes, in principle allowing Gb/s connection speeds between the nodes that make up the dCache instance.
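
This split of services would be reflected in the admin node's `node_config` file. The fragment below is only a sketch: the variable names are modelled on dCache 1.x conventions and are assumptions that may differ between versions.

```
# /opt/d-cache/etc/node_config -- illustrative fragment, not the actual file.
# Variable names follow dCache 1.x conventions and may vary by version.
NODE_TYPE=admin      # run the "dcache-core" services on this box
PNFS_INSTALL=yes     # pnfs databases on the same host (assumed toggle name)
GRIDFTP=yes          # also start a GridFTP door here (assumed toggle name)
```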

Disk Server(s)

There are 7 possible pool nodes, of which 5 are currently in use. The pool nodes are of a similar specification to the admin node.

Each pool node has two 5.4 TB Infortrend RAID boxes attached via SCSI. Each box is configured as RAID 5 and formatted with an ext3 filesystem, and is divided into three smaller (1.8 TB) partitions, each of which serves as a separate dCache pool, for a total of 6 pools per pool node.
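
The layout arithmetic can be checked in a couple of lines (figures taken from the text above):

```shell
# Layout arithmetic for one pool node:
# 2 RAID boxes, each split into 3 partitions, one dCache pool per partition.
boxes=2
partitions_per_box=3
pools_per_node=$((boxes * partitions_per_box))
capacity_tb=$(awk 'BEGIN{print 2 * 3 * 1.8}')    # usable TB per node
echo "pools per node: ${pools_per_node}"
echo "pool capacity per node: ${capacity_tb} TB"
```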

As well as running 6 pools, each pool node also runs a GridFTP door, to make access to the instance as easy as possible.

Optimisation

For added robustness the replica manager is configured to replicate between different hosts, so that a file's replicas never end up on pools in the same device. The large number of GridFTP doors allows fast access, but there is still network tuning that could be done. We have so far avoided the faster XFS filesystem because of a reluctance to rely on custom kernels.
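
To make the same-host exclusion concrete: later dCache releases expose this behaviour as a configuration property. The fragment below is a sketch using that newer property name, which may not match the version deployed here.

```
# Illustrative dCache configuration fragment (property name from later
# dCache releases; shown only to make the behaviour concrete).
replica.enable.same-host-replica=false   # never put two replicas on one host
```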

Additional Information

Our dCache is connected by a Gigabit link straight into the UKLight network, giving us a Gb/s light path direct to RAL. Soon we will also be connected to Manchester through this link.