Migration from a classic to a DPM/SRM SE

* Statement of the problem


The classic SE at Birmingham is equipped with a 1.8 TB RAID array with three partitions of equal size:

 [root@epgse1 root]# df -h | grep sd
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sda1             602G   33M  601G   1% /disk/f3a
 /dev/sdb1             602G   65G  536G  11% /disk/f3b
 /dev/sdc1             602G  4.6G  597G   1% /disk/f3c

The SE access point is /storage (the value of CE_CLOSE_SE1_ACCESS_POINT in site-info.def):

 [root@epgse1 root]# ls /storage
 alice  atlas  babar  biomed  cms  dteam  hone  lhcb  sixt

We would like to migrate our classic SE to a DPM/SRM SE while preserving the data and ensuring that the VOs can still access their data after the migration.

Yaim will be used for the installation. The DPM, DPNS and the SRM will therefore all be deployed on the same machine.

* Preliminary steps

Before proceeding with the upgrade, it is essential to inform all the VOs you support, or (if manageable) all the users with files on your SE, about your plans. They should make sure they have replicas of their files elsewhere; they do not need to delete any files!
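A user could, for instance, replicate a file to another SE with lcg-rep, roughly as follows (a sketch only; the VO, LFN and destination SE below are made-up examples):

 # Example only: make a replica of a catalogued file on another SE
 # (the VO, LFN and destination SE name are illustrative)
 lcg-rep --vo atlas -d another-se.example.ac.uk lfn:/grid/atlas/user/somefile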


It is also a good idea to write and register a dummy file now, so that it can be used for testing after the installation.
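Something along these lines would do (a sketch; the VO, LFN, local file name and fully qualified hostname are only examples):

 # Example only: create a small dummy file and register it on the classic SE
 # (the VO, LFN, local path and SE hostname are illustrative)
 dd if=/dev/urandom of=/tmp/dpm-migration-test.dat bs=1k count=1
 lcg-cr --vo dteam -d epgse1.ph.bham.ac.uk -l lfn:dpm-migration-test \
        file:/tmp/dpm-migration-test.dat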

* Yaim installation of DPM on the classic SE

First read DPM Yaim Install for information on the yaim installation and on the meaning of the DPM variables in site-info.def.

The relevant variables in the Birmingham site-info.def file are:

SE_TYPE=srm_v1
...
CE_CLOSE_SE1_ACCESS_POINT=/dpm/ph.bham.ac.uk/home
...
DPMDATA=/disk/f3b       Path to the filesystem used by DPM to store files
DPMMGR=dpmmgr           Username of the DPM database user
DPMUSER_PWD=**********  Password of the DPM database user
DPMFSIZE=200M           Default space that is reserved for a file that is to be stored in dpm
                        (We think this should be the size of the largest files you expect to be stored in
                         your SRM.)
DPM_HOST=$SE_HOST       dpm hostname
DPMPOOL=dpmPart         Name of the DPM pool that the DPMDATA filesystem shall be in
                        (This is just a label; the pool properties are set later using the
                         dpm-modifypool command.)

(The value of DPMDATA was set to the name of one of our partitions.)
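
The pool properties mentioned above can be adjusted after the installation with dpm-modifypool, roughly like this (the file-size and threshold values are only an illustration):

 # Illustration only: tune the pool created by yaim (values are examples)
 dpm-modifypool --poolname dpmPart --def_filesize 200M \
                --gc_start_thresh 5 --gc_stop_thresh 10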

We did not uninstall the classic SE before running the yaim configuration scripts.

You need to disable globus-gridftp (service stop and chkconfig off) and to enable dpm-gsiftp.
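On a Red Hat style node this amounts to something like the following (a sketch, assuming the init scripts carry the service names used above):

 # Sketch, assuming Red Hat-style init scripts with these service names
 service globus-gridftp stop
 chkconfig globus-gridftp off
 service dpm-gsiftp start
 chkconfig dpm-gsiftp on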

* Testing

It is a good idea now to read the DPM Install Checklist and to proceed with the tests described in DPM Testing.
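A typical globus-url-copy write test into the new namespace looks roughly like this (a sketch; the source file, VO directory and fully qualified hostname are examples):

 # Rough sketch of a gridftp write test into the DPM namespace
 # (source file, VO directory and hostname are examples)
 grid-proxy-init
 globus-url-copy file:///etc/group \
     gsiftp://epgse1.ph.bham.ac.uk/dpm/ph.bham.ac.uk/home/dteam/test-file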

If the globus-url-copy test returns the following error:

error: the server sent an error response: 553 553 Could not 
determine cwdir: No such file or directory.

then it is likely that you have not disabled globus-gridftp and enabled dpm-gsiftp!

* Migration

We took the migration program from [1] and compiled it. We then ran the migration executable (note that /storage/atlas was on /disk/f3b, i.e. DPMDATA):

./migration epgse1 /storage/atlas/ epgse1 /dpm/ph.bham.ac.uk/home/atlas dpmPart

The output was very verbose:

...
/disk/f3b/atlas/generated/2005-07-29/ ==> /dpm/ph.bham.ac.uk/home/atlas/generated/2005-07-29
/disk/f3b/atlas/dtypaldos/ ==> /dpm/ph.bham.ac.uk/home/atlas/dtypaldos
/disk/f3b/atlas/dtypaldos/TopPhysics/ ==> /dpm/ph.bham.ac.uk/home/atlas/dtypaldos/TopPhysics
/disk/f3b/atlas/dtypaldos/TopPhysics/evgen/ ==> /dpm/ph.bham.ac.uk/home/atlas/dtypaldos/TopPhysics/evgen
/disk/f3b/atlas/dtypaldos/TopPhysics/evgen/dt.810002.evgen.ttbar_W2lTEST2/ ==> /dpm/ph.bham.ac.uk/home/atlas/dtypaldos/TopPhysics/evgen/dt.810002.evgen.ttbar_W2lTEST2
End of migration: Sat Oct  8 00:51:18 2005
Number of files migrated to epgse1: 8080
Duration: 0 days 0 hours 6 minutes 6 seconds
Number of files migrated per second: 22.08
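
The result can be cross-checked with the DPNS client tools, for example (a quick sanity check; dpns-ls ships with the DPM client RPMs):

 # Quick sanity check: list the migrated namespace with the DPNS client tools
 export DPNS_HOST=epgse1
 dpns-ls -l /dpm/ph.bham.ac.uk/home/atlas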

* Adding extra filesystems
 [root@epgse1 root]#  dpm-addfs --poolname dpmPart --server epgse1 --fs /disk/f3a
 [root@epgse1 root]#  dpm-addfs --poolname dpmPart --server epgse1 --fs /disk/f3c
 [root@epgse1 root]# dpm-qryconf
POOL dpmPart DEFSIZE 200.00M GC_START_THRESH 0 GC_STOP_THRESH 0 DEFPINTIME 0 PUT_RETENP 86400 FSS_POLICY maxfreespace GC_POLICY lru RS_POLICY fifo GID 0 S_TYPE -
                             CAPACITY 1.76T FREE 1.69T ( 96.1%)
 epgse1 /disk/f3a CAPACITY 601.08G FREE 600.46G ( 99.9%)
 epgse1 /disk/f3b CAPACITY 601.08G FREE 535.57G ( 89.1%)
 epgse1 /disk/f3c CAPACITY 601.08G FREE 596.10G ( 99.2%)

Then run the migration program for each of your remaining VOs, as sketched below.
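For the directory layout shown at the top of this page, that could be done with a small loop (a sketch only; the VO names are taken from the listing of /storage above, and atlas has already been migrated):

 # Sketch: repeat the migration for the remaining VO directories
 for vo in alice babar biomed cms dteam hone lhcb sixt; do
     ./migration epgse1 /storage/$vo/ epgse1 /dpm/ph.bham.ac.uk/home/$vo dpmPart
 done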

* Further testing