GridPP Wiki - User contributions: Michael kenyon

= Glasgow (2008-01-31) =
#REDIRECT [http://www.scotgrid.ac.uk/wiki/index.php/Main_Page]

= UKI-ScotGrid-Glasgow (2008-01-25) =
#REDIRECT [[Glasgow]]

= Glasgow cold-start procedures (2008-01-25) =

=== How to bring [http://www.scotgrid.ac.uk/wiki UKI-ScotGrid-Glasgow] out of a power-failure-induced slumber ===<br />
<br />
* prereqs - check that the aircon and the basement routers are stable<br />
* Bring back power to the underfloor sockets via the reset button on the main switchboard<br />
* The UPS in the server rack should power up, as should all the APC Masterswitches (but NO machines should be on apart from svr031)<br />
* Bring up svr031 fully (disabling nagios first may be advisable)<br />
* Power up disk037 <br />
powernode --host=disk037 --on <br />
* Power up the rest of the disk servers <br />
powernode --host=disk032-036,disk038-041 --on<br />
* bring up the servers<br />
powernode --host=svr016-030 --on # hey, check the actual list of servers - it keeps expanding....<br />
* bring up the nodes (add in a stagger to prevent network flooding)<br />
for i in `seq -w 1 140`; do powernode --host=node$i --on ; sleep 20 ; done<br />
<br />
<br />
Next you'll need to check that the NFS mounts to disk037 are behaving and check the state of the torque workers<br />
svr016 $> pbsnodes -l<br />
and that the monitoring infrastructure looks normal (ganglia, nagios, emails etc)<br />
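<br />
If you want to check the shared area across all the batch workers in one go, a loop like this works (a sketch: it assumes passwordless root ssh to the nodes and that the shared area lives at /cluster/share):<br />
<pre>
for i in `seq -w 1 140`; do
    # accessing the path triggers the automount; report any failure
    ssh node$i "ls /cluster/share/ >/dev/null" || echo "node$i: shared area not OK"
done
</pre>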
<br />
<br />
<br />
[[category:glasgow]]

= Glasgow Local Users Getting Started Guide (2008-01-25) =

The [http://www.scotgrid.ac.uk/wiki Glasgow] cluster is open to all researchers at the University using grid methods. In addition, the Glasgow cluster is part of [http://www.scotgrid.ac.uk/ ScotGrid], the [http://www.eu-egee.org/ EGEE project] and the [http://lcg.web.cern.ch/LCG/ WLCG grid] - if you are a member of a supported virtual organisation (VO) from anywhere in the world you will be able to [http://www.scotgrid.ac.uk/wiki/index.php/Submitting_jobs_to_Glasgow#Users_in_a_Supported_VO use our cluster].<br />
<br />
==Preparing to Use the Cluster==<br />
<br />
===Prerequisite: Get a Grid Certificate===<br />
<br />
You won't be able to use the cluster at all unless you have a grid certificate. Within the UK certificates are issued by the UK eScience CA: http://www.grid-support.ac.uk/ca/.<br />
<br />
You will have to show photo ID to the registrar - currently at Glasgow this is John Watt in NeSC (Kelvin Building).<br />
<br />
If you have never used certificates before please [http://www.grid-support.ac.uk/content/view/188/101/ read the basic documentation].<br />
<br />
===Join a VO===<br />
<br />
The easiest way to use grid resources, even just the ones at Glasgow, is to join a ''Virtual Organisation'', or VO. This is like a group for the grid.<br />
<br />
If your project doesn't yet have a VO then you can [https://voms.gridpp.ac.uk:8443/voms/gridpp/webui/request/user/create join the <tt>gridpp</tt> VO]. You'll need to use the browser which has your certificate in it. When you've verified that this method works we will help you to set up a real VO for your project.<br />
<br />
==Get Local Access To The Cluster==<br />
<br />
===Asking for Access===<br />
<br />
As we like to control all access to the cluster via people's grid certificates, we need to know your identity on the grid, which is the ''distinguished name'', or DN, of your certificate.<br />
<br />
The CA explains [http://www.grid-support.ac.uk/content/view/55/42/ how to find out your DN], so that you can tell us. <br />
<br />
Once you have your DN then:<br />
<br />
# You ''must'' agree to be bound by the latest version of the [https://edms.cern.ch/file/428036/3/Grid_AUP.pdf JSPG ''Grid Acceptable Use Policy'' document].<br />
# Email your agreement and your certificate's DN to the cluster team at [mailto:uki-scotgrid-glasgow@physics.gla.ac.uk uki-scotgrid-glasgow@physics.gla.ac.uk].<br />
<br />
If you have access to a grid-enabled copy of ssh (called ''gsissh'') then this is all you need to do. If you do not, then you should also email us an ssh v2 public key. We allow this ''vanilla'' ssh access only from on campus, whereas gsissh, which is more secure, is allowed from anywhere.<br />
<br />
We'll email you when we have set up your local account.<br />
<br />
===Logging In===<br />
<br />
====gsissh====<br />
<br />
Login to <tt>svr020.gla.scotgrid.ac.uk</tt> on port 2222:<br />
<br />
$ grid-proxy-init<br />
...<br />
$ gsissh -p 2222 svr020.gla.scotgrid.ac.uk<br />
<br />
====Plain ssh====<br />
<br />
You'll need to know your local username (usually of the form glaNNN, e.g. gla123), and have already sent us a public SSH key as described above. Then login to <tt>svr020.gla.scotgrid.ac.uk</tt> on the usual ssh port:<br />
<br />
$ ssh gla123@svr020.gla.scotgrid.ac.uk<br />
<br />
==Prepare Your Work Environment==<br />
<br />
Now that you've logged in to the system you can compile your applications and prepare any necessary data files.<br />
<br />
===Shared Paths===<br />
<br />
For security reasons we cannot share the home directory you have on <tt>svr020</tt> with the cluster's batch system. So anything you put in your home directory is invisible to your running jobs!<br />
<br />
Instead we have a shared data area, where you should put your files. At the moment this is <tt>/cluster/share/USERNAME</tt> (USERNAME is again usually something like glaNNN), however we recommend always referring to this directory through the environment variable <tt>CLUSTER_SHARED</tt> as we don't guarantee the absolute path.<br />
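<br />
For example, a job script should only ever use the variable (a minimal sketch; <tt>my-analysis</tt> is a stand-in for your own program):<br />
<pre>
#!/bin/bash
# Refer to the shared area only via $CLUSTER_SHARED, never the absolute path
cd "$CLUSTER_SHARED"
./my-analysis --input data.dat --output results.dat
</pre>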
<br />
===Copying Files===<br />
<br />
You can use <tt>gsiscp / scp</tt> to copy data and source code into your shared area. One minor gotcha with <tt>gsiscp</tt> is that specifying the port 2222 is done with an uppercase <tt>-P</tt>, not lowercase (which is for preserving permissions). See the man page for all the options.<br />
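<br />
For example (the username and filenames here are illustrative):<br />
<pre>
# Uppercase -P sets the port for gsiscp; lowercase -p preserves permissions
$ gsiscp -P 2222 mycode.tar.gz svr020.gla.scotgrid.ac.uk:/cluster/share/gla123/
# Plain scp from on campus uses the usual port
$ scp mycode.tar.gz gla123@svr020.gla.scotgrid.ac.uk:/cluster/share/gla123/
</pre>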
<br />
===Copying your Grid Certificate===<br />
<br />
One file you definitely have to copy onto the cluster is your grid certificate, which you will need to actually run jobs. To get this onto the cluster:<br />
<br />
# [http://www.grid-support.ac.uk/content/view/17/42/1/1/ Backup your certificate to a file] from your browser. (Please remember to use a strong password!)<br />
# Copy the backup file onto the cluster.<br />
# Unpack the file on <tt>svr020</tt> into ''globus'' format - the [http://www.grid-support.ac.uk/content/view/67/42/ CA describes how to do this].<br />
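<br />
The unpacking step usually boils down to the standard openssl commands (a sketch; <tt>mycert.p12</tt> stands for whatever your browser backup was called - the CA page linked above is authoritative):<br />
<pre>
$ mkdir -p ~/.globus
$ openssl pkcs12 -in mycert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
$ openssl pkcs12 -in mycert.p12 -nocerts -out ~/.globus/userkey.pem
$ chmod 444 ~/.globus/usercert.pem
$ chmod 400 ~/.globus/userkey.pem
</pre>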
<br />
===Tools and Compilers===<br />
<br />
Currently the environment provided on <tt>svr020</tt> is a standard Scientific Linux 4 environment. If there are any standard SL4 tools you need which are not installed just ask us and we'll install them. Otherwise you should prepare these tools as part of your work environment.<br />
<br />
==Submit Jobs==<br />
<br />
===Basic Job Submission===<br />
<br />
Once your environment is set up you can actually run some jobs. For basic job submission you can use the <tt>edg-job-*</tt> tools described in the [http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_Job_Submission_Quickstart_Guide Glasgow Job Submission Quickstart Guide]. These are also [http://www.gridpp.ac.uk/deployment/users/submit.html introduced on the GridPP pages] and described in more detail in the [https://edms.cern.ch/file/722398//gLite-3-UserGuide.html gLite 3 Users Guide].<br />
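<br />
A first test job can be as small as this (a sketch - see the guides above for the details):<br />
<pre>
$ cat hello.jdl
Executable    = "/bin/hostname";
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};

$ edg-job-submit hello.jdl     # prints a job identifier, JOBID
$ edg-job-status JOBID         # poll until the job is Done
$ edg-job-get-output JOBID     # retrieve the output sandbox
</pre>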
<br />
If you have problems we can help, so please ask.<br />
<br />
===Managing Large Numbers of Jobs===<br />
<br />
Once you've mastered single job submission, you can look at using [http://ganga.web.cern.ch/ganga/ ganga] to manage large numbers of jobs. We have a [[Glasgow Ganga Quickstart Guide]] which should get you up and running.<br />
<br />
==Security==<br />
<br />
We know you'd rather not think about it, but it's important.<br />
<br />
As part of joining a VO you will be required to sign that VO's ''Acceptable Use Policy''. We will only enable VOs whose AUPs are acceptable to us. If you want login access to the cluster, then you must also agree to the [https://edms.cern.ch/file/428036/3/Grid_AUP.pdf JSPG AUP] as stated above (read ''VO'' as ''my research project'' where necessary).<br />
<br />
The two points of the AUP we wish to draw particular attention to are:<br />
<br />
# ''You shall [...] protect your GRID credentials (e.g. private keys, passwords)'' i.e. you must use suitable passphrases on grid certificates and ssh keys.<br />
# ''You shall immediately report any known or suspected security breach'', which also includes informing us as a site. If there is a security emergency please inform the email addresses listed [[ScotGrid#Glasgow | here]].<br />
<br />
Thanks!<br />
<br />
[[Category: ScotGrid]] [[Category: Users]] [[Category: Glasgow]]

= Glasgow New Cluster (2008-01-25) =

==Glasgow Cluster Management System (YPF)==<br />
<br />
The new [http://www.scotgrid.ac.uk Glasgow] cluster management system replaces, simplifies and improves on the CVOS system which was run before svr031 had its nasty accident.<br />
<br />
The cluster configuration is stored in an [http://www.sqlite.org SQLite] database (any other database could trivially be used), and from here all necessary information about the cluster can be extracted, e.g., /etc/hosts can be generated and the APC ports can be queried to power down sections of the cluster, etc.<br />
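<br />
As an illustration, generating an /etc/hosts fragment from such a database could look like this (a sketch only - the database path and the table and column names are invented, not the real YPF schema):<br />
<pre>
#!/bin/bash
# Query the cluster database (hypothetical schema) and emit hosts entries
sqlite3 /var/ypf/cluster.db "SELECT ip, hostname FROM hosts ORDER BY ip;" |
while IFS='|' read ip name; do
    printf '%s\t%s %s.beowulf.cluster\n' "$ip" "$name" "$name"
done
</pre>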
<br />
This information is used for maintenance and basic installs; however, most of the node configuration is delegated to [[cfengine]], with scripts converting the information into a format that cfengine likes if necessary.<br />
<br />
<br />
===Administration Details===<br />
<br />
* [[Glasgow Cluster Database Schema]]<br />
* [[Glasgow YPF Repository]]<br />
<br />
===Cluster Tasks===<br />
<br />
* [[Glasgow Cluster New Host | Adding a new machine to the cluster]]<br />
<br />
==Older Documentation (When svr031 ran CVOS)==<br />
<br />
* [[Glasgow New Cluster Installer]]<br />
* [[Glasgow New Cluster Tasklist]]<br />
<br />
[[Category: ScotGrid]]<br />
[[Category: YPF]] [[Category: Glasgow]]

= Glasgow svr031 (2008-01-25) =

Crib notes for the rebuild of '''svr031''' at [http://www.scotgrid.ac.uk/wiki Glasgow]<br />
<br />
This box needs to provide the following resources:<br />
<br />
* Node Rebuilding<br />
** Central database of "stuff" for nodes (sqlite)<br />
** tftp<br />
** dhcpd<br />
** Kickstart configs<br />
** APC power commands<br />
* Package / distro Mirroring - See list of [http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_Mirrors Glasgow Mirrors]<br />
** mirror script (rsync / lftp) - see the sketch after this list<br />
** apache<br />
* monitoring / documentation<br />
** ganglia (the CVOS-supplied init scripts in /slave seem to be heavily based on SLES not RHEL - or at least the startup script begins:<br />
<pre>#! /bin/sh<br />
# Copyright (c) 1995-2000 SuSE GmbH Nuernberg, Germany.<br />
#<br />
# Author: Kurt Garloff <feedback@suse.de><br />
# Oliver Mössinger <olivpass@web.de><br />
#<br />
# init.d/gmond<br />
</pre><br />
** nagios<br />
* Backups (''userdel [[User:Greig_cowan|greig]]?'')<br />
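<br />
The mirror script mentioned above need not be complicated - something along these lines would do (a sketch; the upstream rsync module and local paths are assumptions):<br />
<pre>
#!/bin/bash
# Nightly rsync of the SL package tree into the apache document root
rsync -avH --delete \
    rsync://ftp.scientificlinux.org/scientific/ \
    /var/www/html/mirror/scientific/
</pre>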
<br />
<br />
<br />
<br />
<br />
<br />
[[Category:ScotGrid]] [[Category: Glasgow]]

= Glasgow New Cluster Tasklist (2008-01-25) =

==Installer==<br />
<br />
* Run <tt>firstbootwatcher</tt> as a daemon.<br />
** <span style="color:green">Done</span> Running from cron every minute, but does the business.<br />
* Run <tt>postbootinstaller</tt> automatically, with a limit to the number of installers running simultaneously.<br />
** <span style="color:blue">Superseded</span> Using cfengine for post-kickstart installation now.<br />
* Implement flexible logging (to syslog, and to console if a controlling tty).<br />
** <span style="color:green">Done</span> cfengine logs to syslog - but see centralise syslog action!<br />
* Restructure cfengine to work better with SVN directory layout.<br />
** <span style="color:red">Ongoing</span> Link from Colin is useful: http://sial.org/howto/cfengine/repository/<br />
* Put new YPF installer into SVN<br />
** <span style="color:green">Done</span> Repo on grid01.<br />
* Change autokick to use new YPF clusterdb, instead of old classes.conf file.<br />
** <span style="color:red">Ongoing</span> Should be easy - PHP has SQLite bindings.<br />
<br />
<br />
==Security==<br />
<br />
* Disable module loading on WNs.<br />
** <span style="color:red">Ongoing</span><br />
* Implement basic automatic integrity checks (like kickstart).<br />
** <span style="color:red">Ongoing</span> Can use cfengine to do this - it will checksum any file.<br />
* Disable unnecessary SUID binaries.<br />
** <span style="color:red">Ongoing</span> Can use cfengine to do this.<br />
* Outgoing packets checked and logged by NAT hosts.<br />
** <span style="color:yellow">Ongoing</span> David looking at shorewall on NAT boxes. No progress as of 2007-04, so fallback to simple iptables manipulation.<br />
* Disable root logins between ssh known hosts.<br />
** <span style="color:green">Partial</span> Known hosts authentication does not work for root.<br />
* Disable login from UI to other hosts - UI is the only place that users get a vanilla shell, so defend this.<br />
** <span style="color:green">Done</span> Login to UI is via gsissh.<br />
* Centralise Email on svr031<br />
** <span style="color:red">Ongoing</span> Will use exim on svr031. Need sendmail recipe for SL3 boxes.<br />
<br />
==Monitoring/Logging==<br />
<br />
* Ganglia<br />
** <span style="color:green">Done</span> Configuration files for grid servers, worker nodes and storage hosts controlled by cfengine.<br />
** <span style="color:yellow">In Progress</span> Configuration files for NAT boxes not working fully<br />
* Nagios<br />
** <span style="color:yellow">Done but broken</span> Installed on svr031, but currently dead due to president's missing brain. Use monami as core for new sensors.<br />
* Central syslogging<br />
** <span style="color:green">Done</span> Stanza in cfengine. Log to master - investigate better sysloggers?<br />
* MonAmi<br />
** <span style="color:yellow">Ongoing</span> DPM and NUT sensors deployed. Deploy new sensors as they are made available.<br />
<br />
===Local Accounting===<br />
<br />
* PBS Accounting - install Jamie's PBS->MySQL dump scripts.<br />
** <span style="color:green">Done</span> [http://svr031.gla.scotgrid.ac.uk/accounting/localaccounting].<br />
<br />
====Notes and Wish List====<br />
<br />
'''To do now:'''<br />
* Selecting an end date produces an end time at the ''beginning'' of that day, instead of the end.<br />
* Bar and line CPU efficiency plots for short time periods produce JPGraph errors.<br />
* Summary table should give wall clock ''and'' cpu times.<br />
* Stated values of CPU Hours and KSI2k hours are unclear - need to compare with potential.<br />
* Table of worker nodes should include a commissioning and decommissioning date, to enable the potential cpu hours and KSI2K to be calculated.<br />
<br />
'''Wish list:'''<br />
* For individual groups, need to design a per-user display. (Nice to map to DNs - work and privacy issues though!)<br />
* Should have a scheme for dribbling in data during the day, rather than having to wait for a log record to be complete before processing it (useful when cluster is busy).<br />
<br />
==Batch System==<br />
<br />
* Ensure <tt>TMPDIR</tt> is properly defined (see the sketch after this list).<br />
** <span style="color:green">Done</span>.<br />
* Investigate SGE as alternative ;-)<br />
** <span style="color:red">Ongoing</span> This is not entirely in jest (oh alright, it is...)<br />
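<br />
The <tt>TMPDIR</tt> fix is just a profile script; the idea is roughly this (a sketch, not the exact file - the real ones are the tmpdir.sh/tmpdir.csh files distributed by cfengine, see the worker node excerpt):<br />
<pre>
# /etc/profile.d/tmpdir.sh (sketch) - give each batch job its own scratch dir
if [ -n "$PBS_JOBID" ]; then
    export TMPDIR=/tmp/$PBS_JOBID
    mkdir -p "$TMPDIR"
fi
</pre>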
<br />
==Storage==<br />
<br />
* Deploy more disk servers in production grid mode (19+2+1 disks - RAID 6 with 1 hot spare).<br />
** <span style="color:yellow">Ongoing</span> disk033,043,035,036 are production servers for DPM. disk032 can be deployed anytime. Greig using disk038-041 for dCache tests for next month. Be sensitive to ATLAS needs. N.B. disk032 still needs to go into correct mode.<br />
<br />
==Backups==<br />
<br />
* DPM/LFC databases.<br />
** <span style="color:yellow">Ongoing</span> Done for DPM, but need to automate rsyncing to masternode (cfengine).<br />
* Batch system configuration.<br />
** <span style="color:green">Done</span> Have copied maui configuration from old cluster, but need to add a fair share for local groups.<br />
* VO Tags.<br />
** <span style="color:red">Ongoing</span> Incorporate into a rolling rsync on masternode.<br />
<br />
* Installer subversion repository.<br />
** <span style="color:green">Done</span> but should institute some backups from grid01.<br />
<br />
==Efficiency==<br />
<br />
* Stop WNs downloading CRLs. These files should instead be copied from the CE.<br />
** <span style="color:green">Done</span> httpd installed on CE with <tt>/etc/grid-security/certificates</tt> exported. YAIM override function sets up mirroring of CRLs instead of direct downloads.<br />
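<br />
On the WN side the mirroring amounts to pulling the certificates directory over http from the CE, rather than each WN fetching every CRL from its CA (a sketch; the CE export URL is an assumption):<br />
<pre>
#!/bin/bash
# Mirror /etc/grid-security/certificates from the CE's httpd export
wget -q -r -np -nH --cut-dirs=1 -P /etc/grid-security/certificates \
    http://svr016.gla.scotgrid.ac.uk/certificates/
</pre>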
<br />
==Networking==<br />
<br />
* Get nat nodes up and running<br />
** <span style="color:yellow">Ongoing</span> Installed NAT nodes as SL44 x86_64. David looking at shorewall.<br />
* Ensure http_proxy and no_proxy are defined in the batch environment while direct http access is barred<br />
** <span style="color:green">Done</span>. There is an annoying bug in yum, which does not respect the <tt>no_proxy</tt> variable, so any scripts invoking yum need to undefine <tt>http_proxy</tt>; for this reason the variables are not defined for root. In fact we have now received an exemption from the webcache, so this is no longer needed (and the variables have been undefined).<br />
<br />
==Grid Nodes==<br />
* Install UI for local access<br />
** <span style="color:green">Done</span> <tt>grid-mapfile</tt> mirrored from CE ensuring the same account for submitted jobs.<br />
* Install separate BDII to improve system stability<br />
** <span style="color:green">Done</span> <tt>svr021.gla.scotgrid.ac.uk</tt>. Some problems still seen with GRIS on CE.<br />
* Local LFC<br />
** <span style="color:red">Ongoing</span> Only required for ALICE, so not really needed right now, but would be necessary for SGS VOs.<br />
* RB<br />
** <span style="color:green">Done</span> Improves job submission and local VO support. Installed on svr023, but not advertised in BDII.<br />
* Top Level BDII<br />
** <span style="color:green">Done</span> Shares svr019 with R-GMA.<br />
* Save R-GMA from itself<br />
** <span style="color:red">Ongoing</span> Anthony sent a cron job to sniff for signs of R-GMA putrefaction.<br />
<br />
==Documentation==<br />
<br />
* Procedure for adding new users.<br />
** <span style="color:green">Done</span> [http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_Add_New_Local_User Glasgow Add New Local User]<br />
* Documentation on job submission and storage use.<br />
** <span style="color:yellow">In Progress</span> See [http://www.scotgrid.ac.uk/wiki/index.php/Main_Page#User_Information Glasgow User Information]. Anything else needed?<br />
<br />
<br />
[[Category: ScotGrid]]

= Cfengine: Glasgow Worker Nodes (2008-01-25) =

This is a [[cfengine]] excerpt from [http://www.scotgrid.ac.uk/wiki Glasgow] showing how we manage our worker nodes with cfengine. Note that it's probably a good idea to split your cfengine file into sections once it gets big, but this is presented as a single <tt>cfagent.conf</tt> file for simplicity.<br />
<br />
Note that worker nodes are in the following classes: <tt>worker</tt>, <tt>grid</tt>, <tt>torque</tt>, <tt>autofs</tt>, <tt>scientific_sl_3</tt> and, of course, <tt>any</tt><br />
<br />
<pre><br />
##########################<br />
#<br />
# cfagent.conf for UKI-SCOTGRID-GLASGOW<br />
#<br />
# $Id: cfagent.conf 201 2006-11-22 12:43:00Z root $<br />
#<br />
##########################<br />
<br />
<br />
groups:<br />
worker = ( HostRange(node,1-140) )<br />
gridsvr = ( HostRange(svr,016-023) )<br />
disksvr = ( HostRange(disk,032-041) )<br />
<br />
# Nicer names for grid servers<br />
ce = ( svr016 )<br />
dpm = ( svr018 )<br />
mon = ( svr019 )<br />
ui = ( svr020 )<br />
sitebdii = ( svr021 )<br />
<br />
## Compound groups<br />
# Batch system nodes<br />
torque = ( ce worker )<br />
<br />
# Nodes which look at the autofs system<br />
autofs = ( worker ce ui )<br />
<br />
# All grid nodes<br />
grid = ( worker gridsvr disksvr )<br />
<br />
<br />
control: <br />
any::<br />
actionsequence = ( <br />
directories<br />
files<br />
links <br />
editfiles <br />
packages<br />
copy<br />
shellcommands<br />
tidy<br />
) <br />
<br />
domain = ( beowulf.cluster )<br />
skel = ( /var/cfengine/inputs/skel )<br />
scripts = ( /var/cfengine/inputs/scripts )<br />
syslog = ( on )<br />
ChecksumUpdates = ( on )<br />
<br />
DefaultPkgMgr = ( rpm )<br />
RPMcommand = ( /bin/rpm )<br />
RPMInstallCommand = ( "/usr/bin/yum -y install %s" )<br />
<br />
torque::<br />
torquesvr = ( 10.141.255.16 )<br />
torquequeues = ( dteam:atlas:alice:cms:lhcb:biom:pheno:zeus:sixt:ilc:babar:dzero:ngs:ops:glpnp:glpppt:glee:glbio )<br />
<br />
# It would be nicer if this could be defined more dynamically...<br />
scientific_sl_3::<br />
java = ( j2sdk1.4.2_12 )<br />
<br />
!gridsvr::<br />
# cfengine will run once an hour, so splay the cluster across 50 minutes<br />
# to ensure the load on the master server is not too high<br />
splaytime = ( 50 )<br />
<br />
gridsvr::<br />
splaytime = ( 5 )<br />
<br />
directories:<br />
grid::<br />
# We need to create the locations for files to be copied into - copy runs before shellcommands<br />
/opt/glite/yaim mode=0700 owner=root group=root<br />
/opt/glite/yaim/etc mode=0755 owner=root group=root<br />
/opt/glite/yaim/functions/local mode=0755 owner=root group=root<br />
/etc/grid-security mode=0755 owner=root group=root<br />
<br />
scientific_sl_3::<br />
/usr/java/$(java) mode=0755 owner=root group=root<br />
<br />
torque::<br />
/var/spool/pbs/mom_priv mode=0755 owner=root group=root<br />
/gridstorage mode=0755 owner=root group=root<br />
/home mode=0755 owner=root group=root<br />
<br />
links: <br />
grid.scientific_sl_3::<br />
# In YAIM we give java location as /usr/java/current, and link here<br />
# (It would be much better if grid stuff just used /etc/java.conf or JAVA_HOME, *sigh*)<br />
/usr/java/current -> /usr/java/j2sdk1.4.2_12<br />
<br />
torque::<br />
/gridstorage/exptsw -> /grid/exp_soft<br />
<br />
packages:<br />
any::<br />
ganglia-gmond action=install elsedefine=newgmon<br />
<br />
grid::<br />
lcg-CA action=install version=1.10<br />
# N.B. note that runyaim happens when the yaim package is first installed<br />
glite-yaim action=install elsedefine=runyaim<br />
<br />
worker::<br />
# Worker node meta package<br />
glite-WN action=install<br />
<br />
torque|ui::<br />
# Packages requested by VOs<br />
gcc action=install<br />
gcc-ssa action=install<br />
gcc-g77 action=install<br />
gcc-g77-ssa action=install<br />
zsh action=install<br />
zlib-devel action=install<br />
compat-libstdc++ action=install<br />
<br />
tidy:<br />
any::<br />
# Make sure this is > max wallclock for the batch system!<br />
/tmp pattern=* age=12 recurse=inf<br />
<br />
copy:<br />
# Master server is exempt from default files<br />
any.!svr031::<br />
# Root's environment<br />
$(skel)/common/root/.bash_profile mode=0644 dest=/root/.bash_profile type=sum<br />
$(skel)/common/root/.bashrc mode=0644 dest=/root/.bashrc type=sum<br />
$(skel)/common/root/.ssh/authorized_keys mode=0644 dest=/root/.ssh/authorized_keys type=sum<br />
# Security for servers and ssh<br />
$(skel)/common/etc/ssh/ssh_known_hosts mode=644 dest=/etc/ssh/ssh_known_hosts type=sum<br />
$(skel)/common/etc/ssh/ssh_config mode=644 dest=/etc/ssh/ssh_config define=newssh type=sum<br />
$(skel)/common/etc/ssh/sshd_config mode=600 dest=/etc/ssh/sshd_config define=newssh type=sum<br />
# Time, time, time!<br />
$(skel)/common/etc/ntp.conf mode=644 dest=/etc/ntp.conf define=newntp type=sum<br />
$(skel)/common/etc/ntp/step-tickers mode=644 dest=/etc/ntp/step-tickers define=newntp type=sum<br />
# Environment for interactive shells (and jobs)<br />
$(skel)/common/etc/profile.d/proxy.csh mode=644 dest=/etc/profile.d/proxy.csh type=sum<br />
$(skel)/common/etc/profile.d/proxy.sh mode=644 dest=/etc/profile.d/proxy.sh type=sum<br />
$(skel)/common/etc/profile.d/tmpdir.csh mode=644 dest=/etc/profile.d/tmpdir.csh type=sum<br />
$(skel)/common/etc/profile.d/tmpdir.sh mode=644 dest=/etc/profile.d/tmpdir.sh type=sum<br />
# Post boot signaling script<br />
# This is an important part of Glasgow's auto install - it signals to the master server when the first boot<br />
# after kickstart has happened.<br />
$(skel)/common/etc/rc.d/rc.local mode=644 dest=/etc/rc.d/rc.local type=sum<br />
<br />
<br />
grid::<br />
# GridPP VOMS + YAIM setup for workers<br />
$(skel)/grid/etc/grid-security/vomsdir/voms.gridpp.ac.uk mode=0644 dest=/etc/grid-security/vomsdir/voms.gridpp.ac.uk type=sum<br />
$(skel)/yaim/site-info.def mode=600 dest=/opt/glite/yaim/etc/site-info.def type=sum<br />
$(skel)/yaim/groups.conf mode=644 dest=/opt/glite/yaim/etc/groups.conf type=sum<br />
$(skel)/yaim/users.conf mode=644 dest=/opt/glite/yaim/etc/users.conf type=sum<br />
# We don't let YAIM do users - so override to a blank function<br />
$(skel)/yaim/local/config_users mode=644 dest=/opt/glite/yaim/functions/local/config_users type=sum<br />
<br />
torque::<br />
$(skel)/torque/shosts.equiv mode=644 dest=/etc/ssh/shosts.equiv type=sum<br />
<br />
torque|ui|dpm|disksvr::<br />
# On torque hosts (and the ui) distribute the shadow and password files to avoid problems with account locking, etc.<br />
# DPM and disk servers need this for gridftp<br />
$(skel)/torque/passwd mode=644 dest=/etc/passwd define=localpoolaccounts type=sum<br />
$(skel)/torque/shadow mode=400 dest=/etc/shadow type=sum<br />
$(skel)/torque/group mode=644 dest=/etc/group type=sum<br />
<br />
autofs::<br />
$(skel)/autofs/auto.cluster mode=0644 dest=/etc/auto.cluster define=newautofs type=sum<br />
$(skel)/autofs/auto.grid mode=0644 dest=/etc/auto.grid define=newautofs type=sum<br />
$(skel)/autofs/auto.master mode=0644 dest=/etc/auto.master define=newautofs type=sum<br />
<br />
gridsvr::<br />
$(skel)/gridsvr/etc/gmond.conf mode=0644 dest=/etc/gmond.conf define=newgmon type=sum<br />
<br />
worker::<br />
$(skel)/worker/etc/gmond.conf mode=0644 dest=/etc/gmond.conf define=newgmon type=sum<br />
# Worker nodes need to route directly to grid and disk servers even when their public IPs are given<br />
$(skel)/worker/etc/sysconfig/network-scripts/route-eth1 mode=0644 dest=/etc/sysconfig/network-scripts/route-eth1 define=needroute type=sum<br />
<br />
<br />
shellcommands:<br />
newgmon::<br />
"/sbin/service gmond restart" umask=022<br />
<br />
newssh::<br />
"/sbin/service sshd restart" umask=022<br />
<br />
newntp::<br />
"/sbin/service ntpd restart" umask=022<br />
<br />
newautofs::<br />
"/sbin/service autofs restart" umask=022<br />
<br />
newtorque::<br />
"/sbin/service pbs_server restart" umask=022<br />
<br />
newmaui::<br />
"/sbin/service maui restart" umask=022<br />
<br />
newhttp::<br />
"/sbin/service httpd restart" umask=022<br />
<br />
localpoolaccounts.!ui::<br />
"/var/cfengine/inputs/scripts/local_pool_accounts /etc/passwd" umask=022<br />
<br />
worker.needroute::<br />
"/sbin/ip route add 130.209.239.16/28 dev eth1" umask=022<br />
"/sbin/ip route add 130.209.239.32/28 dev eth1" umask=022<br />
<br />
worker.runyaim::<br />
# Only define startmom if this looks ok, otherwise withdraw from the batch system<br />
"/opt/glite/yaim/scripts/configure_node /opt/glite/yaim/etc/site-info.def WN" umask=022 define=startmom elsedefine=stopmom<br />
<br />
worker.startmom::<br />
"/sbin/chkconfig pbs_mom on" umask=022<br />
"/sbin/service pbs_mom restart" umask=022<br />
<br />
worker.stopmom::<br />
"/sbin/chkconfig pbs_mom off" umask=022<br />
"/sbin/service pbs_mom stop" umask=022<br />
<br />
</pre><br />
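<br />
When testing changes to a file like this, it is handy to run cfagent by hand on a node (standard cfengine 2 options):<br />
<pre>
# dry run: report what would be done without doing it
cfagent -n -v
# run now, skipping the splay delay and ignoring locks
cfagent -q -K
</pre>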
<br />
<br />
[[Category:cfengine]] [[Category: ScotGrid]]

= Glasgow SC4 (2008-01-25) =

This page is a logbook for [http://www.scotgrid.ac.uk/wiki Glasgow] in [[Service Challenge 4]]<br />
<br />
===Transfer Tests===<br />
<br />
Preamble: Initial tests of transfers between RAL and Glasgow revealed a serious problem transferring data between dCache and DPM. Rates were dire, ~2Mb/s.<br />
<br />
Transfer rates from DPM (Edinburgh) to DPM (Glasgow) achieved 200Mb/s without tuning. Transferring back to Edinburgh gave a lower write speed (~80Mb/s), probably as their pool has its filesystems nfs mounted (poor write performance).<br />
<br />
====2005-12-22====<br />
<br />
Set se2 pool to RDONLY. pool1 and pool2 writable. 5 files on the channel. Noticed that 4 files would go to pool1, 1 to pool2 - so pool selection algorithm obviously works on a filesystem, rather than server, basis.<br />
<br />
Experimenting with forcing transfers to one pool, or allowing them to float:<br />
<br />
{| border="1" cellpadding="1"<br />
|+Glasgow SC4 File Transfer Tests<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
! Writable Pools<br />
! Files<br />
! Streams<br />
! Size<br />
! Number<br />
! Bandwidth<br />
! Notes<br />
<br />
|-<br />
| pool1(5 fs), pool2<br />
| 5<br />
| 4<br />
| 1GB<br />
| 20<br />
| 220<br />
| 16 Transfers went to pool1, 4 to pool2<br />
<br />
|-<br />
| pool1(5 fs)<br />
| 5<br />
| 4<br />
| 1GB<br />
| 20<br />
| 212<br />
|<br />
<br />
|-<br />
| pool2<br />
| 5<br />
| 4<br />
| 1GB<br />
| 20<br />
| 196<br />
|<br />
<br />
|-<br />
| pool1(1 fs), pool2<br />
| 5<br />
| 4<br />
| 1GB<br />
| 20<br />
| 216<br />
| 10 transfers to each pool<br />
<br />
|-<br />
| pool1(1 fs), pool2<br />
| 5<br />
| 1<br />
| 1GB<br />
| 20<br />
| 203<br />
| 10 transfers to each pool<br />
<br />
|-<br />
| pool1(1 fs), pool2<br />
| 5<br />
| 10<br />
| 1GB<br />
| 20<br />
| 213<br />
| 10 transfers to each pool<br />
<br />
|}<br />
<br />
[[Image:Ed2gla_var_pools_20051222.png]]<br />
<br />
Differences in transfer rates are minor, and probably within error bars. Suspect transfers are being limited by network or by Edinburgh output data rate. No great effect from multiple streams.<br />
<br />
====2005-12-24====<br />
<br />
On basis of above tests, initiated a 1TB transfer from Ed DPM to Gla DPM. 10 simultaneous files, 4 streams per file:<br />
<br />
<pre><br />
Transfer Bandwidth Report:<br />
1000/1000 transfered in 35683.9609549 seconds<br />
1e+12 bytes transfered.<br />
Bandwidth: 224.190358523Mb/s<br />
</pre><br />
<br />
Steady as she goes, basically.<br />
<br />
====2006-01-09====<br />
<br />
'''1TB Upload to RAL'''<br />
<br />
Was seeing really good rates to RAL when doing <tt>lcg-rep</tt> testing, so triggered a 1TB transfer overnight, using 5 streams. Rate was an excellent 331Mb/s.<br />
<br />
<pre><br />
Transfer Bandwidth Report:<br />
1000/1000 transferred in 24165.1441939 seconds<br />
1e+12 bytes transferred.<br />
Bandwidth: 331.055338872Mb/s<br />
</pre><br />
<br />
I did vary the number of concurrently transferred files, from 3 up to 8, during the transfer, which had no noticeable effect on the transfer rate. It did, however, affect the load average (naturally) and, interestingly, the system CPU load on the machine: 45% for 8 concurrent transfers, 25% for 3.<br />
<br />
I also applied the [[SC3 kernel tweaks]] during the transfer, but again there was no noticeable effect on the rate. Possibly it will have more effect on the sink than the source?<br />
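<br />
For reference, tweaks of this sort are just TCP buffer increases via sysctl (the values here are illustrative - the actual settings are on the [[SC3 kernel tweaks]] page):<br />
<pre>
# enlarge the TCP buffer limits for high bandwidth-delay paths
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
</pre>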
<br />
[[Image:Gla2ral1TB-2006-01-09.gif]]<br />
<br />
====2006-01-26====<br />
<br />
Tuning pools for incoming transfers seems to be tricky (see [[Glasgow DPM Tuning]]), but settled on having pool1 with two writable filesystems and pool2 with 1. Then 5 files seem to ''almost'' guarantee that there's always a file being written to pool2, without having so many transfer streams as to provoke a [[:Image:Ral2gla_5f_10s_40G_fs_pool1_2_writable_load_crisis.png|load crisis]] on one of the pools. 3 streams also keeps the total number of TCP streams reasonable.<br />
<br />
Initial rate was quite good - ~200Mb/s, but this declined after about 4am, down to as low as ~120Mb/s at 0830. Overall the rate was 166Mb/s:<br />
<br />
<pre><br />
Transfer Bandwidth Report:<br />
998/1000 transferred in 47889.3235981 seconds<br />
998000000000.0 bytes transferred.<br />
Bandwidth: 166.717744168Mb/s<br />
</pre><br />
<br />
[[Image:Ral2gla_5f_3s_1TB_pool1_2_writeable_day_rate.png]]<br />
<br />
The two failures were interesting:<br />
<br />
Transfer: srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned1G to srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/pytest/tfr000-file00171<br />
Size: 1000000000<br />
FTS Reason: No such active transfer - RAL-GLAtransXXo0MOgp<br />
<br />
Transfer: srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned1G to srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/pytest/tfr000-file00219<br />
Size: 1000000000<br />
FTS Reason: Transfer succeeded.<br />
<br />
Haven't seen either of these before. Looking on the DPM both files transferred successfully, so I suspect that this was an error in FTS itself - should check the logs.<br />
<br />
Finally, write rates were very patchy - indicating that the balancing of incoming writes isn't working. Snapshot from the end of the transfer shows that pool2 was not transferring continuously:<br />
<br />
[[Image:Ral2gla_5f_3s_1TB_pool1_2_writeable_end.png]]<br />
<br />
Rates to pool1 and pool2 are out of sync, so I think the network may have become the limiting factor at this point.<br />
<br />
Work to be done on i/o rates to our pools, I think.<br />
<br />
===2006-10-27===<br />
<br />
====Inbound test from Edinburgh to Glasgow====<br />
<br />
Started a 24 hour test from Edinburgh to Glasgow at ~1600 (needed Matt to set up new FTS channels for the site).<br />
<br />
Rate was, as expected, rather disappointing. Started at ~200Mb/s, rising to ~250Mb/s as, presumably, the traffic calmed on the university's WAN and the Janet backbone.<br />
<br />
However, this is still way below what we know that hardware is capable of: 3 disk servers alone managed to hit 800Mb/s+ when sinking data from a source connected directly to their switches. During the write test we had 6 disk servers available and DPM used them all.<br />
<br />
====Outbound test from Glasgow to Edinburgh====<br />
<br />
Started up the outbound test after seeding in files from Edinburgh (at the same rate achieved for the inbound test).<br />
<br />
Rate was even worse than the inbound rate! It struggled up to 80Mb/s. Ganglia clearly showed the load spread nicely over all the disk servers, so why was the rate so bad?<br />
<br />
Then on Sunday morning, between ~0600 and 0920 the rate dropped to zero, with all files failing. The FTS server said:<br />
<br />
Reason: Failed on SRM get: Cannot Contact SRM Service. Error in srm__ping: SOAP-ENV:Client - CGSI-gSOAP: Could not open connection !<br />
<br />
but it was not clear whose SRM was down.<br />
<br />
====Action Plan====<br />
<br />
# iperf tests from off campus to the old production site and to the new disk servers (see the sketch below)<br />
# Setup a test DPM using svr023 and a couple of the spare disk servers<br />
## Test internally<br />
## Experiment with stack settings on external tests<br />
## Use SL3?<br />
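<br />
The iperf tests in point 1 follow the usual pattern (a sketch; the hostname is illustrative):<br />
<pre>
# on the disk server under test (the sink)
iperf -s
# from the off-campus host, 4 parallel streams for 60 seconds
iperf -c disk032.gla.scotgrid.ac.uk -P 4 -t 60
</pre>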
<br />
[[Image:Glasgow-tfr-tests-2006-10.png]]<br />
<br />
[[Category: ScotGrid]] [[Category: Service Challenge 4]]

= 2007-Q2 Transfer Tests (2008-01-24) =

Following on from the [[2007-Q1 Transfer Tests]], we have discovered that the glite-transfer-channel settings appear to be ''very'' conservative; i.e., by increasing the Glasgow ones we have achieved FTS rates of '''~700Mb/s'''.<br />
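<br />
Inspecting and raising a channel's settings is a one-liner each way (the channel name here follows the Q1 tests; pick the numbers to suit):<br />
<pre>
$ glite-transfer-channel-list RALLCG2-UKISCOTGRIDGLASGOW
$ glite-transfer-channel-set -f 20 RALLCG2-UKISCOTGRIDGLASGOW
</pre>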
<br />
<br />
{|border="1"<br />
|+ Transfer Test Results - April 2007. Source 1st Column.<br />
!colspan=4 | [[ScotGrid]]<br />
|-<br />
! dest-> !! Glasgow !! Edinburgh !! Durham<br />
|-<br />
! [[RAL Tier1]]<br />
| || || <br />
|-<br />
! [http://www.scotgrid.ac.uk/wiki Glasgow]<br />
| --- || 27-Apr 100G in 33 mins (400Mb/s) || <br />
|-<br />
! [[Edinburgh]]<br />
| 27-Apr 100G in 11 mins (1189Mb/s) || --- || 27-Apr 85G in 1:24 (134Mb/s)<br />
|-<br />
! [[Durham]] <br />
| || || ---<br />
|-<br />
! [[RAL Tier2]]<br />
| || || <br />
|-<br />
!colspan=5 | [[NorthGrid]]<br />
|-<br />
! dest-> !! Lancaster !! Liverpool !! Manchester !! Sheffield<br />
|-<br />
! [[RAL Tier1]] || || || || <br />
|-<br />
! [[Lancaster]] || --- || || ||<br />
|-<br />
! [[Liverpool]]|| || --- || || <br />
|-<br />
![[Manchester]]|| || || --- ||<br />
|-<br />
! [[Sheffield]]|| || || || ---<br />
|-<br />
!colspan=7 | [[SouthGrid]]<br />
|-<br />
! dest-> !! Birmingham !! Bristol !! Cambridge !! Oxford !! Warwick !! RAL Tier2<br />
|-<br />
![[RAL Tier1]] || || || || || ||<br />
|-<br />
![[Birmingham]] || --- || || || || ||<br />
|-<br />
![[Bristol]] || || --- || || || ||<br />
|-<br />
![[Cambridge]] || || || --- || || ||<br />
|-<br />
![[Oxford]] || || || || --- || ||<br />
|-<br />
![[Warwick]] || || || || || ---||<br />
|-<br />
![[RAL_Tier2]] || || || || || || ---<br />
|-<br />
!colspan=8 | [[London Tier2]]<br />
|-<br />
! dest-> !! Brunel !! UCL-HEP !! UCL-CENTRAL !! IC-HEP !! IC-LeSC !! QMUL !! RHUL<br />
|-<br />
![[RAL Tier1]] || || || || || || ||<br />
|-<br />
![[Brunel]] || --- || || || || || ||<br />
|-<br />
![[UCL-HEP]] || || --- || || || || ||<br />
|-<br />
![[UCL-CENTRAL]] || || || --- || || || ||<br />
|-<br />
![[IC-HEP]] || || || || --- || || ||<br />
|-<br />
![[IC-LeSC]] || || || || || --- || ||<br />
|-<br />
![[QMUL]] || || || || || || --- ||<br />
|-<br />
![[RHUL]] || || || || || || || ---<br />
|}<br />
<br />
<br />
[[Category:File Transfer]]<br />
[[Category:Transfer Test Script]]

= Category:YPF (2008-01-24) =

This category gathers pages from the '''YAIM People's Front''', the [http://www.scotgrid.ac.uk/wiki Glasgow] cluster fabric system.<br />
<br />
''Are you the'' People's Front of YAIM ''?''<br><br />
''People's Front of YAIM? We're the YAIM People's Front!''<br><br />
''Yeah, we really hate the Romans...''<br><br />
''What ever happened to the YAIM Popular Front?''<br><br />
''He's over there.''<br><br />
...<br><br />
'''''Splitter!'''''<br />
<br />
(YPF is written in [http://www.python.org/ python], hence takes a name inspired by [http://en.wikipedia.org/wiki/Monty_Python Monty Python], in this case from the [http://en.wikipedia.org/wiki/Monty_Python%27s_Life_of_Brian Life of Brian].)

= 2007-Q1 Transfer Tests (2008-01-24) =

As part of the LCG Dress Rehearsals it's time to review site readiness and capacity with a new round of [[Service_Challenge_Transfer_Tests|transfer tests]], similar to those performed as part of [[:Category:Service Challenge 4|SC4]].<br />
<br />
== Overview ==<br />
The [[Tier 2 SC4 Milestones|Milestones]] set last time for [[Tier 2 SC4 Milestones#Data Transfers|Data Transfers]] were:<br />
* Tier-2 Transfers (End of Summer 2006) - Target rate 250Mb/s, subject to external conditions <br />
* T1 to T2 - Target rate 300-500Mb/s <br />
* Inter T2 Transfers - Target rate 100 Mb/s<br />
<br />
== Milestones ==<br />
For this round we will be using the targets of:<br />
* T1 to T2 - Target rate '''300Mb/s or better'''<br />
* Intra T2 - Target rate '''200Mb/s or better reading / writing.'''<br />
<br />
== Timetable ==<br />
=== dteam ===<br />
* Select at least 2 reference sites per T2 for initial T1 to T2 tests. <br />
** Scotgrid - Gla(dpm) Ed(dCache)<br />
** Northgrid - Lancs(dCache) Shef(dpm)<br />
** Southgrid - ox(dpm) bir(dpm) ''dCache in SouthGrid?- RAL''<br />
** LondonT2 - IC-HEP(dCache) QMUL(dpm?)<br />
* test modestly with 50G - checking for major config snafus or very slow rates.<br />
* thrash overnight with 1T (1000*1G)<br />
<br />
Then move onto canned tests within a T2 - round robin so that we get read/write rates between sites.<br />
<br />
<br />
=== experiments ===<br />
Clashing testing from experiments (potentially)<br />
<br />
== Results ==<br />
{| border="1"<br />
|+ Transfer Test Results - March 2007. Source in 1st column.<br />
!colspan=4 | [[ScotGrid]]<br />
|-<br />
! dest-> !! Glasgow !! Edinburgh !! Durham<br />
|-<br />
! [[RAL Tier1]]<br />
| {{trantest|date=Mar 05|num=50|status=fail}} {{trantest|date=mar 06|num=50|status=done|speed=290|log=transfer-2007-03-06T09-52-38}} {{trantest|date=mar 06|num=114|speed=251|status=done|log=transfer-2007-03-06T14-51-30}} {{trantest|date=mar 06|num=500|speed=90|status=done|log=transfer-2007-03-06T19-06-22}} || {{trantest|date=mar 06|num=50|status=done|speed=263.5|log=transfer-2007-03-06T13-01-03}} {{trantest|date=mar 06|num=557|status=done|speed=270|log=transfer-2007-03-06T14-38-59}} || <br />
|-<br />
! [http://www.scotgrid.ac.uk Glasgow]<br />
| --- || || {{trantest|date=mar 08|num=263|status=done|speed=229|log=transfer-2007-03-08T15-05-50}} {{trantest|date=mar 08|num=1000|status=done|speed=226|log=transfer-2007-03-08T22-34-28}}<br />
|-<br />
! [[Edinburgh]]<br />
| {{trantest|date=mar 08|num=490|status=done|speed=506|log=transfer-2007-03-09T07-18-28}} || --- ||<br />
|-<br />
! [[Durham]] <br />
| || || ---<br />
|-<br />
!colspan=5 | [[NorthGrid]]<br />
|-<br />
! dest-> !! Lancaster !! Liverpool !! Manchester !! Sheffield<br />
|-<br />
! [[RAL Tier1]] || {{trantest|date=mar 08|num=68|status=done|speed=99|log=transfer-2007-03-08T14-42-42}} || || || {{trantest|date=mar 12|status=done|num=10|speed=118|log=transfer-2007-03-12T14-15-29}} {{trantest|date=mar 12|status=done|num=1000|speed=133|log=transfer-2007-03-12T14-36-33}}<br />
|-<br />
! [[Lancaster]] || --- || || ||<br />
|-<br />
! [[Liverpool]]|| || --- || || <br />
|-<br />
![[Manchester]]|| || || --- ||<br />
|-<br />
! [[Sheffield]]|| || || || ---<br />
|-<br />
!colspan=7 | [[SouthGrid]]<br />
|-<br />
! dest-> !! Birmingham !! Bristol !! Cambridge !! Oxford !! Warwick !! RAL Tier2<br />
|-<br />
![[RAL Tier1]] || {{trantest|date=mar 09|num=10|status=failed|log=transfer-2007-03-09T15-05-02}} {{trantest|date=mar 12|num=450|status=cancelled|log=transfer-2007-03-12T22-37-30|speed=116}} {{trantest|date=mar 12|num=527|status=cancelled|log=transfer-2007-03-13T07-57-43|speed=390}} || || || {{trantest|date=mar 09|num=10|status=done|speed=215|log=transfer-2007-03-09T15-19-43}} {{trantest|date=mar 09|num=307|status=canceled|speed=341|log=transfer-2007-03-09T20-12-54}} {{trantest|date=mar 08|num=998|status=done|speed=276|log=transfer-2007-03-09T22-59-07}}|| ||<br />
|-<br />
![[Birmingham]] || --- || || || || ||<br />
|-<br />
![[Bristol]] || || --- || || || ||<br />
|-<br />
![[Cambridge]] || || || --- || || ||<br />
|-<br />
![[Oxford]] || || || || --- || ||<br />
|-<br />
![[Warwick]] || || || || || ---||<br />
|-<br />
![[RAL_Tier2]] || || || || || || ---<br />
|-<br />
!colspan=8 | [[London Tier2]]<br />
|-<br />
! dest-> !! Brunel !! UCL-HEP !! UCL-CENTRAL !! IC-HEP !! IC-LeSC !! QMUL !! RHUL<br />
|-<br />
![[RAL Tier1]] || || || || {{trantest|date=mar 09|num=10|status=done|log=transfer-2007-03-09T15-25-53|speed=644}} {{trantest|date=mar 09|num=384|status=canceled|speed=420|log=transfer-2007-03-09T15-31-41}}|| || {{trantest|date=mar 09|num=10|status=done|log=transfer-2007-03-09T15-26-30|speed=44}}||<br />
|-<br />
![[Brunel]] || --- || || || || || ||<br />
|-<br />
![[UCL-HEP]] || || --- || || || || ||<br />
|-<br />
![[UCL-CENTRAL]] || || || --- || || || ||<br />
|-<br />
![[IC-HEP]] || || || || --- || || ||<br />
|-<br />
![[IC-LeSC]] || || || || || --- || ||<br />
|-<br />
![[QMUL]] || || || || || || --- ||<br />
|-<br />
![[RHUL]] || || || || || || || ---<br />
|}<br />
<br />
<br />
===Issues Discovered===<br />
==== Mar 5 ====<br />
* RAL -> GLA take 1 slow. The channel's number of concurrent files was set to one.<br />
'''glite-transfer-channel-list RALLCG2-UKISCOTGRIDGLASGOW'''<br />
Number of files: 1, streams: 1<br />
Upped that to 8 with a quick<br />
'''glite-transfer-channel-set -f 8 RALLCG2-UKISCOTGRIDGLASGOW'''<br />
Overall status of that test failed due to <br />
Source: srm://ralsrma.rl.ac.uk:8443/srm/managerv1?SFN=//castor/ads.rl.ac.uk/prod/grid/hep/disk1tape1/dteam/j/jkf/castorTest/1GBcanned016<br />
Destination: srm://svr018.gla.scotgrid.ac.uk:8443/srm/managerv1?SFN=/dpm/gla.scotgrid.ac.uk/home/dteam/aetest10/tfr000-file00016<br />
State: Failed<br />
Retries: 4<br />
Reason: Failed on SRM get: Failed SRM get on httpg://ralsrma.rl.ac.uk:8443/srm/managerv1 ; id=818873759 call, no TURL retrieved for srm://ralsrma.rl.ac.uk//castor/ads.rl.ac.uk/prod/grid/hep/disk1tape1/dteam/j/jkf/castorTest/1GBcanned016<br />
Duration: 0<br />
<br />
Retrying using 10 source files that are known good (and will repeat for other T1-T2 tests)<br />
<br />
<br />
* RAL->GLA Take 2<br />
<pre><br />
transfer 0 (721e56a6-cbc8-11db-9388-dffd0907342b)<br />
50/50 (50000000000.0) transferred. Started at 9:53:55, Done at 10:15:52, Duration = 0:21:57, Bandwidth = 303.622157873Mb/s<br />
<br />
Date of Submission was 6/3/2007<br />
Total number of FTS submissions = 1<br />
50/50 transferred in 1379.30051994 seconds<br />
50000000000.0bytes transferred.<br />
Average Bandwidth:290.002065697Mb/s<br />
</pre><br />
<br />
* RAL -> EDI Take 1<br />
<br />
./filetransfer.py --number=50 --uniform-source-size --ignore-status-error --delete --background \<br />
srm://ralsrma.rl.ac.uk:8443//castor/ads.rl.ac.uk/prod/grid/hep/disk1tape1/dteam/j/jkf/castorTest/1GBcanned00[0:9] \<br />
srm://srm.epcc.ed.ac.uk:8443/pnfs/epcc.ed.ac.uk/data/dteam/atest10/<br />
==== Mar 23 ====<br />
Suspended dteam testing while CMS transfers in progress. Bottleneck at RAL firewall. Awaiting RAL network upgrade.<br />
See [http://gridpp-ops.blogspot.com/2007/03/transfer-slowdown.html Ops Blog Posting]<br />
<br />
=== Channel Capacity ===<br />
Realised that not all ''glite-transfer-channel-list''s were created equal, so drew up a quick table (using this [[User:Andrew_elwell/Scripts#chan.pl|script]]):<br />
{|<br />
!Site !!From<br>RAL !! Star !! To <br>RAL<br />
|-<br />
|UKI-NORTHGRID-SHEF-HEP || 8 || 8 || 1<br />
|-<br />
|UKI-LT2-IC-LESC || 1 || 1 || 1<br />
|-<br />
|UKI-SOUTHGRID-BHAM-HEP || 8 || 5 || 1<br />
|-<br />
|UKI-SOUTHGRID-OX-HEP || 8 || 8 || 1<br />
|-<br />
|UKI-LT2-IC-HEP || 10 || 40 || 10<br />
|-<br />
|UKI-SOUTHGRID-CAM-HEP || 8 || 8 || 1<br />
|-<br />
|UKI-LT2-UCL-HEP || 1 || 8 || 8<br />
|-<br />
|SCOTGRID-EDINBURGH || 8 || 8 || 1<br />
|-<br />
|UKI-LT2-UCL-CENTRAL || 8 || 8 || 1<br />
|-<br />
|UKI-NORTHGRID-MAN-HEP || 1 || 8 || 1<br />
|-<br />
|UKI-SCOTGRID-GLASGOW || 8 || 20 || 10<br />
|-<br />
|UKI-NORTHGRID-LANCS-HEP || 8 || 10 || 12<br />
|-<br />
|UKI-SOUTHGRID-BRIS-HEP || 1 || 8 || 5<br />
|-<br />
|UKI-LT2-BRUNEL || 8 || 8 || 8<br />
|-<br />
|UKI-SCOTGRID-DURHAM || 1 || 8 || 1<br />
|-<br />
|UKI-SOUTHGRID-RALPP || 8 || 8 || 8<br />
|-<br />
|UKI-LT2-RHUL || 5 || 8 || 1<br />
|-<br />
|UKI-LT2-QMUL || 2 || 3 || 1<br />
|-<br />
|UKI-NORTHGRID-LIV-HEP || 1 || 8 || 1<br />
|}<br />
<br />
[[Category:File Transfer]]<br />
[[Category:Transfer Test Script]]

= Cfengine (2008-01-24) =

== Overview ==<br />
Cfengine is a cluster configuration management tool that is currently in use by [[Manchester]], [[Lancaster]] and [http://www.scotgrid.ac.uk/wiki Glasgow].<br />
<br />
Some sample configurations will be uploaded once they've been sanitised.<br />
<br />
===Cfengine Guides===<br />
<br />
Some help on specific cfengine issues.<br />
<br />
* [[Cfengine: Getting started]]<br />
* [[Cfengine: Distributing configuration across the cluster]]<br />
* [[Cfengine: Installing packages with cfengine]]<br />
* [[Cfengine: Running scripts with cfengine]]<br />
<br />
A bit more background on some things<br />
<br />
* [[Cfengine: Classes]]<br />
<br />
== Some Recent talks mentioning cfengine ==<br />
[[User:Colin_morey]] mentioned cfengine in his networking and cfengine talks at the recent HEP Sysman conference; he hasn't yet managed to finish the accompanying notes, but the slides are available on application.<br />
* [http://indico.cern.ch/getFile.py/access?contribId=34&sessionId=21&resId=12&materialId=slides&confId=3738 Use of Cfengine for deploying LCG/gLite middleware.]<br />
<br />
==Examples==<br />
<br />
* [http://www.scotgrid.ac.uk/wiki/index.php/Cfengine:_Glasgow_Worker_Nodes Cfengine: Glasgow Worker Nodes]<br />
* [[Cfengine: Testing ssh connectivity to the CE]]<br />
* [[Cfengine: Cleaning Out Scratch Space]]<br />
* [[Cfengine: Installing Xrootd with cfengine]]<br />
<br />
== External links ==<br />
* [http://www.cfengine.org Main site]<br />
* [http://www.pearcec.com/cfengine_styleguide.html cfengine style guide]<br />
* [http://cfwiki.org/cfwiki/index.php/Main_Page cfengine wiki]<br />
<br />
[[Category:GridPP_Deployment]]<br />
[[Category:cfengine]]

= Service Challenge Transfer Test Summary (2008-01-24) =

This table records the best [[Service Challenge Transfer Tests | transfer rates]] achieved at each GridPP Tier 2 site.<br />
<br />
The current [[Tier_2_SC4_Milestones|goals]] are for sites to transfer<br />
<br />
* ''at least'' 250Mb/s when only reading<br />
* ''at least'' 250Mb/s when only writing<br />
* 200Mb/s when simultaneously reading and writing<br />
<br />
<br />
{| border="1" cellpadding="1"<br />
|+GridPP Tier 2 SC4 File Transfer Tests<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
!<br />
!T1->T2<br />
!T2->T1<br />
!<br />
!Inter T2<br />
!Inter T2<br />
!Inter T2<br />
!<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
!Site<br />
!Best<br>Inbound<br>Rate<br><br />
!Best<br>Outbound<br>Rate<br><br />
!Notes<br>T1<->T2<br />
!24hour<br>Average<br>Read<br>Rate<br><br />
!24hour<br>Average<br>Write<br>Rate<br><br />
!24hour<br>Average<br>Read/Write<br>Rate<br><br />
!Notes<br>inter T2<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
!<br />
!Feb/Mar 2006<br />
!Feb/Mar 2006<br />
!<br />
!Sept/Oct 2006<br />
!Sept/Oct 2006<br />
!Sept/Oct 2006<br />
!<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[ScotGrid]]<br />
<br />
|-<br />
|[[Durham]]<br />
|<span style="color:red">193</span><br />
|<span style="color:red">176</span><br />
|Single server with ext3 filesystem for DPM. Outbound currently rate limited by NORMAN to 200Mb/s.<br />
|<span style="color:red">212</span><br />
|<span style="color:red">170</span><br />
|<span style="color:red">225 / 110</span><br />
| At the moment DPM limited to one disk server, combined with the headnode, so we've probably achieved as much as can be done with this hardware. Continue to chip away at the NORMAN problem - but this will take some time.<br />
<br />
|-<br />
|[[Edinburgh]]<br />
|<span style="color:green">276</span><br />
|<span style="color:green">440</span><br />
|[[Edinburgh_dCache_Setup]]<br />
|<span style="color:green">306</span><br />
|<span style="color:green">372</span><br />
| <br />
|Edinburgh was used as a reference site during the inter-T2 24hour transfer tests<br />
<br />
<br />
|-<br />
|[http://www.scotgrid.ac.uk/wiki Glasgow]<br />
|<span style="color:green">414</span><br />
|<span style="color:green">331</span><br />
| Separate DPM headnode with two disk servers using xfs partitions seems to work very well.<br />
|<span style="color:red">66</span><br />
|<span style="color:red">235</span><br />
|<br />
| Glasgow's new cluster capable of very high rates (see [http://scotgrid.blogspot.com/2006/10/glasgow-dpm-has-been-tested-by-jamie.html blog]), but rates achieved from outside are poor (150Mb/s from RAL). Will investigate TCP window sizes, and use iperf to diagnose problems. We have good relations with university level network team. <em>2006-11-21 Update:</em> Suspicion now falling on campus gateway router running out of hardware slots for ACL processing - things fall into software and go really slowly. <br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[NorthGrid]]<br />
<br />
|-<br />
|[[Lancaster]]<br />
| <span style="color:green">800</span><br />
| <span style="color:green">500</span><br />
| Best rates over lightpath to [[RAL Tier1]]<br />
|<span style="color:red">100</span><br />
|<span style="color:red">100</span><br />
|<span style="color:red">100</span> / <span style="color:green">550</span><br />
|<br />
<br />
|-<br />
|[[Liverpool]]<br />
|<span style="color:red">88</span><br />
|<span style="color:red">22</span><br />
| Problems with inaccessible gridftp doors<br />
|<span style="color:red">182</span><br />
|<span style="color:red">182</span><br />
|<span style="color:red">70 / 155</span><br />
|<br />
<br />
|-<br />
|[[Manchester]]<br />
|<span style="color:green">320</span><br />
|<span style="color:green">320</span><br />
|[[dCache]] configuration has changed since best rate - should retest<br />
|<span style="color:green">360</span><br />
|<span style="color:green">331</span><br />
|<span style="color:green">271 / 244</span><br />
|<br />
<br />
<br />
<br />
|-<br />
|[[Sheffield]]<br />
|<span style="color:red">144</span><br />
|<span style="color:green">414</span><br />
| Currently all dCache services running on single node acting as the disk server to ~2TB of RAID level-5 disk. Will rerun the inbound test with new ext3 mount options (noatime,nodiratime,data=writeback,commit=60).<br />
|<span style="color:red">190</span><br />
|<span style="color:red">47</span><br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[SouthGrid]]<br />
<br />
|-<br />
|[[Birmingham]]<br />
|<span style="color:green">317</span><br />
|<span style="color:green">461</span><br />
|<br />
|<span style="color:green">320</span><br />
|<span style="color:red">180</span><br />
|<span style="color:red">152 / 155</span><br />
|Performance slightly down and hence dropped as a reference site. Possible rate capping by MidMAN. Investigations continue<br />
<br />
|-<br />
|[[Bristol]]<br />
|<span style="color:red">117</span><br />
|<span style="color:green">291</span><br />
| DPM writing to a single local 200GB IDE disk, formatted with ext3. Basic tests suggest a maximum write rate of ~170Mbps, before including any file transfer overhead. Will try alternative ext3 mount options, but will have to consider moving to xfs and/or using additional hardware. MoU states that Bristol will provide 2TB storage. SRM may get access to Bristol cluster storage via GPFS.<br />
|<span style="color:red">242</span><br />
|<span style="color:red">216</span><br />
|<br />
|New 1Gb/s link has improved rates, but still have to look into contention from other users on the same network. IS says the 1Gb switch will be upgraded to 10Gb (when campus backbone also upgraded to 10Gb) but no timeframe yet.<br />
<br />
|-<br />
|[[Cambridge]]<br />
|<span style="color:green">293</span><br />
|<span style="color:red">153</span><br />
|Single DPM server using ext3 mounted local partitions. Outbound test will be redone.<br />
|<span style="color:green">310</span><br />
|<span style="color:green">325</span><br />
|<br />
|Good rates; need to find out what improved.<br />
<br />
|-<br />
|[[Oxford]]<br />
|<span style="color:green">252</span><br />
|<span style="color:green">456</span><br />
|<br />
|<span style="color:red">88</span><br />
|<br />
|<br />
| Oxford performance dramatically dropped after a new campus firewall was installed on 15.8.06. Investigations continue.<br />
<br />
|-<br />
|[[RAL Tier2]]<br />
|<span style="color:green">397</span><br />
|<span style="color:green">388</span><br />
|<br />
|<span style="color:green">372</span><br />
|<span style="color:green">306</span><br />
|<br />
| RAL PPD was used as a reference site during the inter-T2 24-hour transfer tests<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[London Tier2]]<br />
<br />
|-<br />
|[[Brunel]]<br />
| <span style="color:red">57</span><br />
|<span style="color:red">59</span><br />
| One headnode, one pool node with xfs, one pool node with jfs.<br />
|<span style="color:red">29</span><br />
|<span style="color:red">27</span><br />
|<span style="color:red">14/14</span><br />
| Single reads and writes capped at 30 Mbit/s. Combined read/write capped at 20 Mbit/s. 2007-01-05: Brunel campus increased to 1 Gbit/s, with the Grid subnet cap increased to 100 Mbit/s.<br />
<br />
|-<br />
|[[IC-HEP]]<br />
|<span style="color:red">80</span><br />
|<span style="color:red">190</span><br />
| Much better rates achieved with srmcp and PhEDEx. Seeing high CPU IO wait on the disk servers when FTS is used in urlcopy (3rd party GridFTP) mode. Also, urlcopy does not transfer data directly to the pool, but via a GridFTP door, leading to a lot of inter-disk server traffic. Changed the STAR-IC FTS channel to use srmCopy (thanks to Matt Hodges) and observed a significant boost in the inbound transfer rate and essentially no inter-disk server traffic, as data now goes directly to the pool. The problem with IO wait is still present, however. Also, dCache error messages in the logs need to be investigated; these appear to be correlated with failures in the file transfers.<br />
|<br />
|<br />
|<br />
|<span><br />
Imperial could not schedule inter-T2 transfer tests due to CMS<br />
transfer tests in Sept/Oct. We have observed<br />
high rates with CMS PhEDEx and FTS transfer tests: inbound and<br />
outbound rates were around ~500 Mb/s, with peaks<br />
of ~800 Mb/s.<br />
</span><br />
<br />
|-<br />
|[[IC-LeSC]]<br />
|<span style="color:red">156</span><br />
|<span style="color:red">95</span><br />
| The outbound test discovered a 100Mb/s bottleneck on site. This was removed before the inbound test was completed. Currently all DPM services are running on a single node, with the disk pool on the same disk as the machine's other filesystems. IC-LeSC is investigating the building of [[DPM-on-Solaris|DPM on Solaris]]. Will rerun the inbound test with new ext3 mount options (noatime,nodiratime,data=writeback,commit=60). The outbound test has been re-run since the bottleneck was resolved: 217Mb/s was recorded to Edinburgh (using FTS in srmCopy mode) and 222Mb/s to Glasgow (using FTS in urlcopy mode). <br />
|<br />
|<br />
|<br />
|<span><br />
DPM under Solaris is still under investigation. Therefore the default<br />
SE for IC-LeSC is the dCache installed at IC-HEP. <br />
<br />
</span><br />
<br />
|-<br />
|[[QMUL]]<br />
|<span style="color:red">118</span><br />
|<span style="color:red">172</span><br />
|The poolfs was improved. The basic idea is that two options could be<br />
considered (both at compile time, though they might somehow be<br />
implemented at run time):<br />
1] poolfs chooses the nodes according to a job's characteristics - for<br />
example, if a job has several processes writing to disk,<br />
we try to write all the files on the same machine.<br />
2] poolfs follows a round-robin policy. In principle this should<br />
allow several writes in parallel, improving performance.<br />
We got bandwidth peaks of about 300 Mb/s (from site) and<br />
about 400 Mb/s (to site).<br />
|<span style="color:red">179</span><br />
|<span style="color:red">106</span><br />
|<span style="color:green">241</span> / <span style="color:red">66</span><br />
|<br />
<br />
|-<br />
|[[RHUL]]<br />
|<span style="color:red">59</span><br />
|<span style="color:red">58</span><br />
| Separate head and pool nodes. One pool node deployed, two more awaiting deployment. Using jfs filesystems and some legacy data on nfs. Will drain the nfs data when the tool is available. Urgently need this tool to drain the existing pool node for maintenance too.<br />
|<span style="color:red">18</span><br />
|<span style="color:red">39</span><br />
|<span style="color:red">34 / 31</span><br />
|Rates as expected, limited by 100Mb/s connection to LMN shared with other campus traffic. Upgrade to 1 Gb/s is going ahead shortly.<br />
<br />
|-<br />
|[[UCL-HEP]]<br />
|<span style="color:red">71</span><br />
|<span style="color:red">63</span><br />
| Two pools separated from the head node: one node (for dteam, ops, etc.) still uses a 100Mb/s NIC (a planned upgrade to Gb/s failed); the second pool (for atlas) is nfs mounted. The head node is connected via a Gb switch to the LMN, through the shared campus network. A new disk server is planned (purchase completed) to replace the nfs pool (need the migration tool). Need to address the pool node with 100Mb/s connectivity.<br />
|<span style="color:red">34</span><br />
|<span style="color:red">17</span><br />
|<br />
| Rate drop since March not understood, although in line with what is seen at e.g. RHUL or Brunel. Dteam pool limited by a 100Mb/s bottleneck at the pool interface.<br />
<br />
|-<br />
|[[UCL-CENTRAL]]<br />
|<span style="color:red">90</span><br />
|<span style="color:green">309</span><br />
| Currently using NFS to mount storage onto DPM head node from their disk servers. Is it possible to install the DPM disk pool software directly onto these servers?<br />
|<span style="color:green">281</span><br />
|<span style="color:green">262</span><br />
| <br />
|<br />
<br />
|-<br />
<br />
|}<br />
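<br />
''Aside:'' several entries above mention tuning TCP window sizes and using <tt>iperf</tt> to diagnose poor WAN rates. A minimal sketch of such a test follows; the hostname is a placeholder, and the window and stream values are illustrative only, not recommendations.<br />
 # run an iperf server on the far end (e.g. a test box at RAL)<br />
 iperf -s<br />
 # from the local SE: 60 second test, 2MB TCP window, 4 parallel streams<br />
 iperf -c far-end.example.ac.uk -t 60 -w 2M -P 4<br />
 # the kernel may cap the requested window - raise the socket buffer maxima first<br />
 sysctl -w net.core.rmem_max=8388608 net.core.wmem_max=8388608<br />
Comparing single-stream and multi-stream rates at different window sizes usually shows whether the limit is the bandwidth-delay product or something in the path (firewall, rate cap).<br />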
<br />
[[Category: Service Challenge 4]]</div>Michael kenyonhttps://www.gridpp.ac.uk/wiki/Birmingham_SC4Birmingham SC42008-01-24T15:02:02Z<p>Michael kenyon: </p>
<hr />
<div>==Transfer Tests==<br />
<br />
===2006-01-16===<br />
<br />
====1TB From Glasgow====<br />
<br />
Initial testing showed a pretty good rate to [[Birmingham]] from [http://www.scotgrid.ac.uk/wiki Glasgow]. Triggered the usual 1000x1GB test at 11pm. This was 5 simultaneous files with 5 streams each.<br />
<br />
Unfortunately <tt>glite-transfer-status</tt> threw a new type of wobbly, so the script terminated prematurely. However, from the Ganglia plot the transfer took ~7 hours, which is a rate of 317Mb/s.<br />
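<br />
As a quick sanity check on that figure (taking the 1000x1GB files as decimal GB, i.e. 10^12 bytes in total):<br />
 # 10^12 bytes = 8x10^12 bits, over 7 hours (25200 s), in Mb/s<br />
 $ echo "(1000 * 10^9 * 8) / (7 * 3600) / 10^6" | bc<br />
 317<br />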
<br />
[[Image:Gla2bham1TB 2.png]]<br />
<br />
[[Category: Service Challenge 4]] [[Category: SouthGrid]]</div>Michael kenyonhttps://www.gridpp.ac.uk/wiki/Transfer_Test_ChecklistTransfer Test Checklist2008-01-24T15:01:09Z<p>Michael kenyon: </p>
<hr />
<div>To actually do a [[Service Challenge Transfer Tests|transfer test]] check that you have the following:<br />
<br />
==Prerequisites==<br />
<br />
# You have [[Grid Storage|SRM Storage]] at your site.<br />
# You have contacted [[RAL Tier1 File Transfer Service#Service Endpoints|RAL]] to setup an FTS Channel to your site.<br />
# You have your FTS client configured to use the [[RAL Tier1 File Transfer Service#Local Deployment Information|RAL FTS endpoint]].<br />
# Check you can [[RAL_Tier1_File_Transfer_Service#Basic Usage| submit a single transfer]] and that it works (a minimal sketch follows below).<br />
<br />
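As a minimal sketch of that last point - the FTS endpoint URL and both SURLs below are placeholders, so substitute your own:<br />
 # submit one transfer: -s names the FTS service, then source and destination SURLs<br />
 $ glite-transfer-submit -s https://fts.example.ac.uk:8443/glite-data-transfer-fts/services/FileTransfer \<br />
     srm://source.example.ac.uk:8443/dpm/example.ac.uk/home/dteam/canned1G \<br />
     srm://dest.example.ac.uk:8443/dpm/example.ac.uk/home/dteam/test/canned1G<br />
 # poll the returned job ID until the state is Done<br />
 $ glite-transfer-status <job-id><br />
<br />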
''N.B.'' There is a [https://savannah.cern.ch/bugs/?func=detailitem&item_id=13973 known bug] in the LCG 2.6.0 UI repositories (if you have a 2.7.0 UI then it's fine), which can cause your FTS client commands to become corrupted if you try an <tt>apt-get dist-upgrade</tt> on the UI. If this has happened then [http://www.scotgrid.ac.uk/wiki Glasgow] have a 2.7.0 UI, with working FTS client commands, to which anyone from GridPP can get access in order to perform file transfers. (Mail [[User:graeme stewart]] your preferred username, an ssh v2 public key and the name of the host you want to login from.) <br />
<br />
The only RPM set on an LCG 2.6 UI known to work is:<br />
<br />
# rpm -qa | grep glite-data-transfer<br />
glite-data-transfer-cli-1.3.4-1<br />
glite-data-transfer-api-c-2.9.0-1<br />
glite-data-transfer-interface-2.9.0-1<br />
<br />
==Preparations==<br />
<br />
# You should have answered [[GridPP Answers to 10 Easy Network Questions]] for your site. This means you should know who your local network contacts are.<br />
# Get in touch with your local network contacts to explain [[Service Challenge Transfer Tests|what we're trying to do and why]].<br />
## It helps if you offer to try the transfer at a quiet time on the network (evenings/weekends).<br />
## You do want to try a smaller test first anyway, which could be used to understand the impact of the transfer tests.<br />
# Do a smaller test, say 10-50GB. This will give you an idea of the bandwidth available on the network between you and RAL, and how long it will take to do a TB transfer (a worked example follows below).<br />
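<br />
As a worked example (the figures are assumed purely for illustration): if a 50GB test takes 35 minutes, the sustained rate and the projected time per (decimal) TB follow directly:<br />
 # rate in Mb/s: 50 GB = 50x10^9 bytes = 4x10^11 bits, over 35 minutes<br />
 $ echo "scale=1; (50 * 10^9 * 8) / (35 * 60) / 10^6" | bc<br />
 190.4<br />
 # hours per TB at that rate: a TB is 20x the test size<br />
 $ echo "scale=1; 20 * 35 / 60" | bc<br />
 11.6<br />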
<br />
==Initiating a Transfer==<br />
<br />
# Before you submit a large (1TB) transfer make sure that all affected and interested parties know about it (RAL, your local institution, [[User:Graeme stewart|Graeme Stewart]] (test coordinator)). Enter your test date into the [[Service Challenge Transfer Tests]] table.<br />
# [[User:Graeme stewart|Graeme Stewart]] has written a [http://www.physics.gla.ac.uk/~graeme/scripts/index.shtml#filetransfer script] to trigger and monitor transfers, it's very robust and provides lots of useful information on the transfer status (more features are being added too). Here's a [[Transfer Test Python Script HOWTO]]. <br />
#Previously [[User:Chris brew|Chris Brew]] wrote a [[RALPP_Local_SC4_Preparations#Test_Script|shell script]] which can be used for transfer tests. Jamie Ferguson has made modifications to this script and has written a [http://ppewww.ph.gla.ac.uk/~fergusjk/ftsIndex.html perl wrapper] for it, which makes it a bit nicer.<br />
<br />
==Monitoring RAL End==<br />
<br />
To keep an eye on the data coming out of [[RAL Tier1|RAL]] there are numerous things to look at.<br />
# [http://ganglia.gridpp.rl.ac.uk/?c=SC&m=%5Bnone%5D&r=day&s=descending Ganglia plots of disk servers]. These show the data in and out of each of the disk servers.<br />
# [http://netstats.rl.ac.uk/slan.php?if=fwrtra RAL network stats]. The plots shows the data rates into and out of RAL on the production network for the whole RAL site.<br />
# [http://ganglia.gridpp.rl.ac.uk/cgi-bin/ganglia-fts/fts-page.pl?r=day FTS Metrics]. In particular this shows instantaneous values for the number of started/active/done/failed transfers on the [[RAL Tier1 File Transfer Service|RAL FTS]]. Also included are estimates of the current rates of data transfer for each channel, but bear in mind that these estimates may be unreliable, especially for large transfers and plots over small time scales; this is a due to the limited information that can be extracted from the FTS logs.<br />
<br />
==Postscript==<br />
<br />
# Summarise the results in the [[Service Challenge Transfer Tests]] table.<br />
<br />
==Appendix==<br />
<br />
===Canned Data Locations===<br />
For the inter-tier 2 tests:<br><br />
List of [[Edinburgh_SURLs|Edinburgh SURLs]]<br><br />
List of [[Birmingham_SURLs|Birmingham SURLs]]<br><br />
List of [[RALT2_SURLs|RAL T2 SURLs]]<br><br />
<br />
Canned files at RAL tier 1:<br />
<br />
srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned100k<br />
srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned1M<br />
srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned10M<br />
srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned100M<br />
srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned1G<br />
srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned2G<br />
srm://dcache.gridpp.rl.ac.uk:8443/pnfs/gridpp.rl.ac.uk/data/dteam/tfr2tier2/canned5G<br />
<br />
Canned files at Glasgow:<br />
<br />
srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/tfr2tier2/canned100k<br />
srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/tfr2tier2/canned1M<br />
srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/tfr2tier2/canned10M<br />
srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/tfr2tier2/canned100M<br />
srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/tfr2tier2/canned1G<br />
<br />
Multiple transfers of the same source file will speed up the data source SE a little, as the file data will get held in cache buffers on the pool nodes, but this will not significantly affect the results.<br />
<br />
These files use the ''correct decimal interpretation'' of <tt>k</tt>, <tt>M</tt> and <tt>G</tt>, i.e. they are 100000, 1000000, 10000000, 100000000, ... bytes ''exactly''. Most of the GNU tools, such as <tt>dd</tt> and <tt>ls</tt>, use the ''binary'' <tt>2^n</tt> interpretation, i.e., <tt>1K=1024</tt> bytes. (Really this is a <tt>KiB</tt>, or [http://en.wikipedia.org/wiki/Kibibyte ''kibibyte''].) I felt that decimal values make it easier to calculate bandwidths properly.<br />
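<br />
For illustration, files of these exact decimal sizes can be generated with GNU <tt>dd</tt> by using decimal block sizes (this is an assumption about how one ''could'' make such files, not a record of how the canned set was made):<br />
 # exactly 10^9 bytes; /dev/zero is fast, but use /dev/urandom if anything in<br />
 # the transfer path might compress zero-filled data<br />
 $ dd if=/dev/zero of=canned1G bs=1000000 count=1000<br />
 $ ls -l canned1G    # reports 1000000000 bytes<br />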
<br />
''Post Script:'' In fact the [http://www.physics.gla.ac.uk/~graeme/scripts/ python transfer script] uses an <tt>srmGetMetadata</tt> call to calculate the size of all the source files, so this really isn't so important anymore.<br />
<br />
===Approximate 1TB Transfer Times===<br />
<br />
{|border="1" cellpadding="1" align="center"<br />
|-style="background:#7C8AAF;color:white"<br />
! Network Bandwidth (Mb/s) || Approximate Transfer Time per TB (hours)<br />
|-<br />
| 50 || 44<br />
|-<br />
| 100 || 22<br />
|-<br />
| 200 || 11<br />
|-<br />
| 400 || 5.5<br />
|- <br />
| 1000 || 2.2<br />
|}<br />
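<br />
These figures follow from the transfer time in hours being 8x10^6 / (3600 x rate) for a decimal TB (8x10^12 bits); a one-liner to regenerate them:<br />
 $ for r in 50 100 200 400 1000; do echo "scale=1; 8 * 10^6 / (3600 * $r)" | bc; done<br />
 44.4<br />
 22.2<br />
 11.1<br />
 5.5<br />
 2.2<br />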
<br />
<br />
<br />
<br />
[[Category: File Transfer]] [[Category: Service Challenge 4]]</div>Michael kenyonhttps://www.gridpp.ac.uk/wiki/Service_Challenge_Transfer_TestsService Challenge Transfer Tests2008-01-24T15:00:00Z<p>Michael kenyon: </p>
<hr />
<div>As part of the [[Tier 2 SC4 Milestones]], GridPP will perform transfer tests using the [[gLite File Transfer Service|File Transfer Service]], between the [[RAL Tier1]] and each Tier 2 (T1<->T2) and Tier 2 to Tier 2 (T2<->T2).<br />
<br />
These tests aim to:<br />
<br />
# To demonstrate connectivity at high bandwidth from the T1 to each T2. The target rate is 300-500Mb/s.<br />
# To demonstrate connectivity at high bandwidth from each T2 to another T2. The target rate is >250Mb/s.<br />
# To demonstrate that this bandwidth can be maintained over prolonged periods of intense file transfer (this is an end-to-end test of components from FTS->SRMs->Network).<br />
<br />
==Summary==<br />
<br />
An abbreviated summary of the best results in these transfer tests is in [[Service Challenge Transfer Test Summary]].<br />
<br />
Testing that has continued in the run-up to LHC startup is described at:<br />
* [[2007-Q1_Transfer_Tests]]<br />
* [[2007-Q2_Transfer_Tests]]<br />
* [[2007-Q3_Transfer_Tests]]<br />
<br />
==Transfer Tests==<br />
<br />
=== Transfer Test Coordinator ===<br />
<br />
After a brief period away, it's, once again, the delightful... <strike>[[User:Graeme stewart|Graeme Stewart]]</strike> [[User:Andrew elwell|Andrew Elwell]]!<br />
<br />
=== Doing A Transfer Test ===<br />
<br />
* We've a [[Transfer Test Checklist]], which tells you what you need in place and walks through the mechanics of doing a transfer.<br />
<br />
===Script and Settings===<br />
====Filetransfer.py====<br />
Ensure the latest version of the [http://www.physics.gla.ac.uk/~graeme/scripts/#filetransfer filetransfer script] is installed on the machine from which the FTS transfers will be submitted.<br><br><br />
The options/arguments that should be used are...<br />
--duration => time in minutes (or hours and minutes) that the test should run for.<br />
--delete => the files are to be deleted from the destination after each transfer has taken place.<br />
--background => After the first submission run all the transfers in the background<br />
--logfile => Log transfers to this file (can be tailed to monitor progress)<br />
--uniform-source-size => the metadata (size) of just one file is checked and not all of the source SURLs.<br />
Checking 100 would take a long time and is unnecessary if all source files are the same size.<br />
srm://srm.epcc.ed.ac.uk:8443/pnfs/epcc.ed.ac.uk/data/dteam/canned/tfr000-file00[000:099] is the list of source SURLs.<br />
srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/T2toT2test is the destination endpoint. It is<br />
formulated from an srm hostname and directory which exists/shall be created in the destination srm namespace.<br />
A typical example (the example is from Edinburgh to Glasgow) would be...<br />
filetransfer.py --duration=24:00 --background --logfile=ed2gla --delete --uniform-source-size \<br />
srm://srm.epcc.ed.ac.uk:8443/pnfs/epcc.ed.ac.uk/data/dteam/canned/tfr000-file00[000:099] \<br />
srm://se2-gla.scotgrid.ac.uk:8443/dpm/scotgrid.ac.uk/home/dteam/24hrInterT2-dest/`date +%Y%m%d_%H%M%S`<br />
<br />
====Files/Streams per channel====<br />
Recommended values are<br><br />
Number of files = 8<br><br />
Number of Streams = 1<br><br />
These can be set as below (obviously using your own FTS channel names)<br />
$ glite-transfer-channel-set -f 8 STAR-SCOTGRIDGLA<br />
$ glite-transfer-channel-set -T 1 STAR-SCOTGRIDGLA<br />
and can be checked thus<br />
$ glite-transfer-channel-list STAR-SCOTGRIDGLA |grep files<br />
Number of files: 8, streams: 1<br />
<br />
=== Table of Responsibilities ===<br />
<br />
{|border="1",cellpadding="1"<br />
|+GridPP Tier 2 (ref) - Tier 2 (non-ref) SC4 File Transfer Tests September-December 06<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
!Site<br />
!Times<br />
!Person Responsible<br>for Submitting Tests<br />
!email<br />
<br />
|-<br />
|-style="background:#D6D4E3"<br />
!colspan="2" | [[ScotGrid]]<br />
|Graeme Stewart<br />
|g.stewart@physics.gla.ac.uk<br />
<br />
|-<br />
|[[Durham]]<br />
|<br />
|Mark Nelson<br />
|Mark.Nelson@durham.ac.uk<br />
<br />
|-<br />
|[[Edinburgh]]<br />
|<br />
|Greig Cowan<br />
|G.Cowan@ed.ac.uk<br />
<br />
|-<br />
|[http://www.scotgrid.ac.uk/wiki Glasgow]<br />
|<br />
|[[User:Graeme stewart]]<br />
|g.stewart@physics.gla.ac.uk<br />
<br />
|-<br />
|-style="background:#D6D4E3"<br />
!colspan="2" | [[NorthGrid]]<br />
|Alessandra forti<br />
|Alessandra.Forti@manchester.ac.uk<br />
<br />
|-<br />
|[[Lancaster]]<br />
|<br />
|Brian Davies<br />
|b.g.davies@lancaster.ac.uk<br />
<br />
|-<br />
|[[Liverpool]]<br />
|<br />
|Paul Trepka<br />
|pat@hep.ph.liv.ac.uk<br />
<br />
|-<br />
|[[Manchester]]<br />
|<br />
|Alessandra forti<br />
|Alessandra.Forti@manchester.ac.uk<br />
<br />
|-<br />
|[[Sheffield]]<br />
|<br />
|<br />
|d.g.wilson@sheffield.ac.uk<br />
<br />
|-<br />
|-style="background:#D6D4E3"<br />
!colspan="2" | [[SouthGrid]]<br />
|Pete Gronbech<br />
|gronbech@physics.ox.ac.uk<br />
<br />
|-<br />
|[[Birmingham]]<br />
|<br />
|Yves Coppens<br />
|yrc@hep.ph.bham.ac.uk<br />
<br />
|-<br />
|[[Bristol]]<br />
|<br />
|Yves Coppens<br>Winnie Lacesso<br />
|lcg-site-admin@phy.bris.ac.uk<br />
<br />
|-<br />
|[[Cambridge]]<br />
|<br />
|Santanu Das<br />
|santanu@hep.phy.cam.ac.uk<br />
<br />
|-<br />
|[[Oxford]]<br />
|<br />
|Pete Gronbech<br />
|gronbech@physics.ox.ac.uk<br />
<br />
|-<br />
|[[RAL_Tier2]]<br />
|<br />
|Chris Brew<br />
|C.A.J.Brew@RL.AC.UK<br />
<br />
|-<br />
|-style="background:#D6D4E3"<br />
!colspan="2" | [[London Tier2]]<br />
|Olivier Van-der-aa<br />
|o.van-der-aa@IMPERIAL.AC.UK<br />
<br />
|-<br />
|[[Brunel]]<br />
|<br />
|Duncan Rand<br />
|duncan.rand@brunel.ac.uk<br />
<br />
<br />
|-<br />
|[[IC-HEP]]<br />
|<br />
|Mona Aggarwal<br />
|m.aggarwal@imperial.ac.uk<br />
<br />
<br />
|-<br />
|[[IC-LeSC]]<br />
|<br />
|Mona Aggarwal<br />
|m.aggarwal@imperial.ac.uk<br />
<br />
|-<br />
|[[QMUL]]<br />
|<br />
|Giuseppe Mazza<br />
|g.mazza@qmul.ac.uk<br />
<br />
|-<br />
|[[RHUL]]<br />
|<br />
| Duncan Rand <br />
| duncan.rand@brunel.ac.uk<br />
<br />
|-<br />
|[[UCL-CENTRAL]]<br />
|<br />
|Alice Fage<br>William Hay<br>Austin Chamberlain<br />
|is-lcg-support@ucl.ac.uk<br />
<br />
|-<br />
|[[UCL-HEP]]<br />
|<br />
|Gianfranco Sciacca<br />
|Gianfranco Sciacca <gs@hep.ucl.ac.uk><br />
<br />
|-<br />
|}<br />
<br />
=== Transfer Tests Endpoints ===<br />
<br />
[[Transfer_Tests_Edinburgh_Endpoints|Edinburgh Endpoints]]<br><br />
[[Transfer_Tests_Birmingham_Endpoints|Birmingham Endpoints]]<br><br />
[[Transfer_Tests_RAL_T2_Endpoints|RAL T2 Endpoints]]<br><br />
<br />
===SC4 File Transfer Tests===<br />
<br />
{|border="1",cellpadding="1"<br />
|+GridPP Tier 2 SC4 File Transfer Tests Jul 06 - Dec 06<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
!Site<br />
!July 2006<br />
!August 2006<br />
!September 2006<br />
!October 2006<br />
!November 2006<br />
!December 2006<br />
<br />
|-<br />
|[[RAL Tier1]]<br />
|<br />
|<span style="color:red">14: 2hrs -> [http://www.scotgrid.ac.uk/wiki GLA]</span><br><br />
<span style="color:red">14: 4hrs -> [http://www.scotgrid.ac.uk/wiki GLA]</span><br><br />
<span style="color:red">14: 4hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">15: 5hrs -> [http://www.scotgrid.ac.uk/wiki GLA]</span><br><br />
<span style="color:red">15: 5hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">15: 5hrs -> [[IC-HEP]]</span><br><br />
<span style="color:red">16: 5hrs -> [http://www.scotgrid.ac.uk/wiki GLA]</span><br><br />
<span style="color:red">16: 5hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">16: 5hrs -> [[IC-HEP]]</span><br><br />
<span style="color:red">16: 5hrs -> [[Bristol|BRIS]]</span><br />
|<br />
|<br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[ScotGrid]]<br />
<br />
|-<br />
|[[Durham]]<br />
|<br />
|<br />
|<span style="color:red">5: 24hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">6: 24hrs <- [[Edinburgh|Ed]]</span><br />
|<span style="color:red">3: 24hrs <-> [[Edinburgh|Ed]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[Edinburgh]]<br />
|<br />
|<span style="color:red">28: 6hrs -> [[RAL Tier2|RAL T2]]</span><br><br />
<span style="color:red">28: 6hrs <- [[RAL Tier2|RAL T2]]</span><br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[http://www.scotgrid.ac.uk/wiki Glasgow]<br />
|<br />
|<br />
|<br />
|<span style="color:green">27: 24hrs <- [[Edinburgh|Ed]] @235Mb/s[http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_SC4#2006-10-27 *]</span><br><br />
<span style="color:green">28: 24hrs -> [[Edinburgh|Ed]] @66Mb/s[http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_SC4#2006-10-27 *]</span><br />
|<span style="color:red">11: 24hrs <-> [[Edinburgh|Ed]]</span><br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[NorthGrid]]<br />
<br />
|-<br />
|[[Lancaster]]<br />
|<br />
|<br />
|<span style="color:red">11: 24hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">12: 24hrs <- [[Edinburgh|Ed]]</span><br />
|<span style="color:red">4: 24hrs <-> [[Edinburgh|Ed]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[Liverpool]]<br />
|<br />
|<br />
|<span style="color:red">13: 24hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">14: 24hrs <- [[Edinburgh|Ed]]</span><br />
|<span style="color:red">5: 24hrs <-> [[Edinburgh|Ed]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[Manchester]]<br />
|<br />
|<br />
|<span style="color:red">7: 24hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">8: 24hrs <- [[Edinburgh|Ed]]</span><br />
|<br />
|<span style="color:green">11: 24hrs <-> [[Edinburgh|Ed]] @271/244</span><br />
|<br />
<br />
|-<br />
|[[Sheffield]]<br />
|<br />
|<br />
|<span style="color:red">15: 24hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">18: 24hrs <- [[Edinburgh|Ed]]</span><br />
|<span style="color:red">9: 24hrs <-> [[Edinburgh|Ed]]</span><br />
|<span style="color:red">13: 24hrs <-> [[Edinburgh|Ed]]</span><br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[SouthGrid]]<br />
<br />
|-<br />
|[[Birmingham]]<br />
|<br />
|<span style="color:red">29: 6hrs -> [[Edinburgh|Ed]]</span><br><br />
<span style="color:red">29: 6hrs <- [[Edinburgh|Ed]]</span><br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Bristol]]<br />
|<br />
|<br />
|<span style="color:red">1: 24hrs -> [[Birmingham|BHAM]]</span><br><br />
<span style="color:red">4: 24hrs <- [[Birmingham|BHAM]]</span><br />
|<span style="color:red">2: 24hrs <-> [[Birmingham|BHAM]]<br />
|<span style="color:red">9: 24hrs <-> [[RAL Tier2|RAL T2]]<br />
|<br />
<br />
|-<br />
|[[Cambridge]]<br />
|<br />
|<br />
|<span style="color:red">5: 24hrs -> [[Birmingham|BHAM]]</span><br><br />
<span style="color:red">6: 24hrs <- [[Birmingham|BHAM]]</span><br />
|<span style="color:red">3: 24hrs <-> [[Birmingham|BHAM]]<br />
|<span style="color:red">13: 24hrs <-> [[RAL Tier2|RAL T2]]<br />
|<br />
<br />
|-<br />
|[[Oxford]]<br />
|<br />
|<br />
|<span style="color:red">7: 24hrs -> [[Birmingham|BHAM]]</span><br><br />
<span style="color:red">8: 24hrs <- [[Birmingham|BHAM]]</span><br />
|<span style="color:red">4: 24hrs <-> [[Birmingham|BHAM]]<br />
|<span style="color:red">9: 24hrs -> [[RAL Tier2|RAL T2]]<br><span style="color:red">14: 24hrs <-> [[RAL Tier2|RAL T2]]<br />
|<br />
<br />
|-<br />
|[[RAL Tier2]]<br />
|<br />
|<span style="color:red">30: 6hrs -> [[Birmingham|BHAM]]</span><br><br />
<span style="color:red">30: 6hrs <- [[Birmingham|BHAM]]</span><br />
|<br />
|<br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[London Tier2]]<br />
<br />
|-<br />
|[[Brunel]]<br />
| <br />
|<br />
|<span style="color:red">1: 24hrs -> [[RAL Tier2|RAL T2]]</span><br><br />
<span style="color:red">4: 24hrs <- [[RAL Tier2|RAL T2]]</span><br />
|<span style="color:red">2: 24hrs <-> [[RAL Tier2|RAL T2]]<br />
|<br />
|<br />
<br />
|-<br />
|[[IC-HEP]]<br />
|<br />
|<br />
|<span style="color:red">11: 24hrs -> [[Birmingham|BHAM]]</span><br><br />
<span style="color:red">12: 24hrs <- [[Birmingham|BHAM]]</span><br />
|<span style="color:red">5: 24hrs <-> [[Birmingham|BHAM]]<br />
|<br />
|<br />
<br />
|-<br />
|[[IC-LeSC]]<br />
|<br />
|<br />
|<span style="color:red">13: 24hrs -> [[Birmingham|BHAM]]</span><br><br />
<span style="color:red">14: 24hrs <- [[Birmingham|BHAM]]</span><br />
|<span style="color:red">6: 24hrs <-> [[Birmingham|BHAM]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[QMUL]]<br />
|<br />
|<br />
|<span style="color:red">5: 24hrs -> [[RAL Tier2|RAL T2]]</span><br><br />
<span style="color:red">6: 24hrs <- [[RAL Tier2|RAL T2]]</span><br />
|<span style="color:red">3: 24hrs <-> [[RAL Tier2|RAL T2]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[RHUL]]<br />
|<br />
|<br />
|<span style="color:red">7: 24hrs -> [[RAL Tier2|RAL T2]]</span><br><br />
<span style="color:red">8: 24hrs <- [[RAL Tier2|RAL T2]]</span><br />
|<span style="color:red">4: 24hrs <-> [[RAL Tier2|RAL T2]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[UCL-CENTRAL]]<br />
|<br />
|<br />
|<span style="color:red">11: 24hrs -> [[RAL Tier2|RAL T2]]</span><br><br />
<span style="color:red">12: 24hrs <- [[RAL Tier2|RAL T2]]</span><br />
|<span style="color:red">5: 24hrs <-> [[RAL Tier2|RAL T2]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[UCL-HEP]]<br />
|<br />
|<br />
|<span style="color:red">13: 24hrs -> [[RAL Tier2|RAL T2]]</span><br><br />
<span style="color:red">14: 24hrs <- [[RAL Tier2|RAL T2]]</span><br />
|<span style="color:red">6: 24hrs <-> [[RAL Tier2|RAL T2]]</span><br />
|<br />
|<br />
<br />
|-<br />
|}<br />
<br />
{|border="1",cellpadding="1"<br />
|+GridPP Tier 2 SC4 File Transfer Tests Dec 05 - Jun 06<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
!Site<br />
!December 2005<br />
!January 2006<br />
!February 2006<br />
!March 2006<br />
!April 2006<br />
!May 2006<br />
!June 2006<br />
<br />
|-<br />
|[[RAL Tier1]]<br />
|<span style="color:green">7: 1TB -> [[RAL Tier2|RAL T2]] @197Mb/s[[RALPP_Local_SC4_Preparations#Status_at_08-12-2005|*]]</span><br><span style="color:green">13: 0.86TB -> [[RAL Tier2|RAL T2]] @194Mb/s[[RALPP_Local_SC4_Preparations#Status_at_14-12-2005|*]]</span><br><span style="color:green">22: 1TB -> [[Manchester|MAN]] @~320Mb/s[[Manchester#Transfer_Tests_2005-12-22|*]]</span><br />
| <span style="color:green">9: 1TB <- [http://www.scotgrid.ac.uk/wiki GLA] @331Mb/s[http://www.scotgrid.ac.uk/wiki Glasgow]</span><br><span style="color:green">10: 0.985TB -> [[RAL Tier2|RAL T2]] @397Mb/s[[RALPP_Local_SC4_Preparations#Status_at_11-01-2006|*]]</span><br><span style="color:green">12-23: @1200Mb/s <- [[CERN]] [[RAL_Tier1_SC3_Log|*]]<br><span style="color:green">25: 1TB <- [[RAL Tier2|RAL T2]]@388Mb/s[[RALPP_Local_SC4_Preparations#Status_at_26-01-2006|*]]</span><br><span style="color:green">25: 1TB -> [[Birmingham|BHAM]] @289Mb/s</span><br><span style="color:green">26: 1TB -> [http://www.scotgrid.ac.uk/wiki GLA]@166Mb/s[http://www.scotgrid.ac.uk/wiki/Glasgow_SC4#2006-01-26 *]</span><br><span style="color:green">26: 1TB -> [[Oxford|OX]] @252Mb/s</span><br><span style="color:green">27: 1TB <- [[Birmingham|BHAM]] @407Mb/s</span><br><span style="color:green">28: 1TB <- [[Oxford|OX]] @456Mb/s</span><br><span style="color:green"> 29: 1TB <- [[Birmingham|BHAM]] @461Mb/s</span><br><span style="color:green">29: 974GB <- [[Manchester|MAN]]@152.755</span><br><span style="color:green">31: 1TB <- [[Manchester|MAN]]@143.988</span><br><span style="color:green">30: 1TB <- [[Oxford|OX]]@456Mb/s</span><br />
|<span style="color:green">7: 1TB -> [[Edinburgh|ED]]</span><br><span style="color:green">8: 1TB -> [[Liverpool|LIV]][[Liverpool#Transfer_Tests_2006-02-08|*]]</span><br><span style="color:green">3: 1TB <- [[Liverpool|LIV]][[Liverpool#Transfer_Tests_2006-02-08|*]]</span><br><span style="color:green">14: 1TB <- [[QMUL]]@118Mb/s</span><br><span style="color:green">15: 1TB ->[[QMUL]]@172Mb/s</span><br><span style="color:green">16: 96GB <- [[Bristol|BRIS]] @85Mb/s</span><br><span style="color:green">16: 240GB <- [[Cambridge|CAM]] @74Mb/s</span><br>17-20: RAL T1 shut down<br />
|<span style="color:green">28-3 [[SC4 Aggregate Throughput|48hr]] -> [[Birmingham|BHAM]],[http://www.scotgrid.ac.uk/wiki GLA],[[Manchester|MAN]],[[QMUL]]</span><br><span style="color:green">7-10 [[SC4 Aggregate Throughput|48hr]] -> [[Birmingham|BHAM]],[http://www.scotgrid.ac.uk/wiki GLA],[[Manchester|MAN]],[[QMUL]]</span><br><span style="color:green">15-16 [[SC4 Aggregate Throughput|48hr]] -> [[Birmingham|BHAM]],[http://www.scotgrid.ac.uk/wiki GLA],[[Manchester|MAN]],[[QMUL]],[[Lancaster|LANC]]@1100Mb/s[[SC4 Aggregate Throughput#Thursday 2006-03-16|*]]</span><br><span style="color:green">22-23 [[SC4 Aggregate Throughput|48hr]] <- [[Birmingham|BHAM]],[http://www.scotgrid.ac.uk/wiki GLA],[[Manchester|MAN]],[[QMUL]],[[Lancaster|LANC]]</span><br />
|<br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[NorthGrid]]<br />
<br />
|-<br />
|[[Lancaster]]<br />
|<span style="color:green">12: 50GB <- [[RAL Tier1|T1]] @150Mb/s</span><br><br />
<span style="color:green">20: 1TB <- [[RAL Tier1|T1]] @~800Mb/s</span><br><br />
<span style="color:green">19: 1TB -> [[RAL Tier1|T1]] @0Mb/s</span><br><br />
|<span style="color:green">9: 1.6TB -> [[RAL Tier1|T1]] @500Mb/srmcp only(noFTS)</span><br><br />
|<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Liverpool]]<br />
|<br />
|<br />
|<span style="color:green">2: 1TB <- [[RAL Tier1|RAL T1]][[Liverpool#Transfer_Tests_2006-02-08|*]]<br />
@ 88 Mb/s</span><br><span style="color:green">8: 1TB -> [[RAL Tier1|RAL T1]][[Liverpool#Transfer_Tests_2006-02-08|*]]<br />
@ 24 Mb/s</span><br />
|<br />
|<span style="color:red">18 1TB<-[http://www.scotgrid.ac.uk/wiki GLA]</span><br><span style="color:red">19 1TB->[http://www.scotgrid.ac.uk/wiki GLA]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[Manchester]]<br />
| <span style="color:green">21: 250GB <- [[RAL Tier1|RAL]] @320Mb/s[[Manchester#Transfer_Tests_2005-12-21|*]]</span><br><span style="color:green">22: 1TB <- [[RAL Tier1|RAL]] @~320Mb/s[[Manchester#Transfer_Tests_2005-12-22|*]]</span><br />
|<span style="color:green">29: 974GB -> [[RAL Tier1|RAL]]@152.755Mb/s[[Manchester#Transfer_Tests_2006-01-29|*]]</span><br><span style="color:green">31: 1TB -> [[RAL Tier1|RAL]]@143.988Mb/s[[Manchester#Transfer_Tests_2006-01-31|*]]</span><br />
|<br />
|<br />
|<br />
|<br />
|<br />
<br />
<br />
|-<br />
|[[Sheffield]]<br />
|<br />
|<br />
|<br />
|<br />
|<span style="color:green">12 0.542TB<-[http://www.scotgrid.ac.uk/wiki GLA]@100.7Mb/s[[Sheffield_Transfer_Test_Log#12-04-06|*]]</span><br><span style="color:green">13 0.25TB<-[http://www.scotgrid.ac.uk/wiki GLA]@144Mb/s[[Sheffield_Transfer_Test_Log#13-04-06|*]]</span><br><span style="color:green">13 1TB->[http://www.scotgrid.ac.uk/wiki GLA]@414Mb/s[[Sheffield_Transfer_Test_Log#13-04-06|*]]</span><br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[ScotGrid]]<br />
<br />
|-<br />
|[[Durham]]<br />
|<br />
|<span style="color:green">18: 500GB <- [http://www.scotgrid.ac.uk/wiki GLA]@88Mb/s[[Durham SC4#2006-01-18|*]]</span><br />
|<br />
|<span style="color:green">20 194GB<-[[RAL Tier1|RAL T1]]@193Mb/s[[Durham SC4#2006-03-20|*]]</span><br />
|<br />
|<br />
| <span style="color:green">20 100GB->[http://www.scotgrid.ac.uk/wiki GLA]@176Mb/s[[Durham SC4#2006-06-02|*]]</span><br />
<br />
<br />
|-<br />
|[[Edinburgh]]<br />
| <span style="color:green">2: 100GB -> [http://www.scotgrid.ac.uk/wiki Gla] @200Mb/s</span><br><span style="color:green">13: 25GB <- [[RAL Tier1|T1]] @11Mb/s</span>[[Ed_SC4_Dcache_Tests#13.2F12.2F05|*]]<br><span style="color:green">14: 500MB <- [[RAL Tier1|T1]]</span>[[Ed_SC4_Dcache_Tests#14.2F12.2F05|*]]<br><span style="color:green">14: 10GB <- [[RAL Tier1|T1]]</span>[[Ed_SC4_Dcache_Tests#14.2F12.2F05|*]]<br><span style="color:green">14: 1GB <- [[RAL Tier1|T1]] </span>[[Ed_SC4_Dcache_Tests#14.2F12.2F05|*]]<br><span style="color:green">16: various tests to/from [[RAL Tier1|T1]]</span>[[Ed_SC4_Dcache_Tests#16.2F12.2F05|*]]<br><span style="color:green">16: 15GB -> [[RAL Tier1|T1]] @422Mb/s</span>[[Ed_SC4_Dcache_Tests#Ed_to_RAL|*]]<br><span style="color:green">17: 15GB <- [[RAL Tier1|T1]] @156Mb/s</span>[[Ed_SC4_Dcache_Tests#17.2F12.2F05|*]]<br><span style="color:green">17: 10GB <- [http://www.scotgrid.ac.uk/wiki Gla] @57Mb/s</span>[[Ed_SC4_Dcache_Tests#GLA-ED|*]]<br><span style="color:green">17: 10GB -> [http://www.scotgrid.ac.uk/wiki Gla] @122Mb/s</span>[[Ed_SC4_Dcache_Tests#ED-GLA|*]]<br />
|<span style="color:green">9: 1TB -> [[RAL Tier1|RAL]] @440Mb/s[[Ed_SC4_Dcache_Tests#12.2F01.2F06|*]]</span><br />
|<span style="color:green">6: 1TB <- [[RAL Tier1|RAL]]</span><br />
|<br />
|<span style="color:green">18: 0.25TB <- [[Sheffield|SHEF]]@203Mb/s[[Ed_SC4_Dcache_Tests#18.2F04.2F06|*]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[http://www.scotgrid.ac.uk/wiki Glasgow]<br />
| <span style="color:green">2: 100GB <- [[Edinburgh|Ed]] @200Mb/s</span><br><span style="color:green">22: 5x20GB <- [[Edinburgh|Ed]] @~210Mb/s[http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_SC4#2005-12-22 *]</span><br><span style="color:green">24: 1TB <- [[Edinburgh|Ed]] @224Mb/s[http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_SC4#2005-12-24 *]</span><br />
| <span style="color:green">9: 1TB -> [[RAL Tier1|RAL]] @331Mb/s[http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_SC4#2006-01-09 *]</span><br><span style="color:green">16: 1TB -> [[Birmingham|BHAM]] @317Mb/s[[Birmingham_SC4#2006-01-16|*]]</span><br><span style="color:green">17: 66GB -> [[QMUL]]@80Mb/s[[QMUL#17.2F01.2F2006|*]]</span><br><span style="color:green">18: 500GB -> [[Durham|DUR]]@88Mb/s[[Durham SC4#2006-01-18|*]]</span><br><span style="color:green">20: 200GB -> [[RHUL]]@59Mb/s[[RHUL_site_log#2006-01-20|*]]</span><br><span style="color:green">26: 1TB <- [[RAL Tier1|RAL]]@166Mb/s[http://www.scotgrid.ac.uk/wiki/index.php/Glasgow_SC4#2006-01-26 *]</span><br />
|<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[London Tier2]]<br />
<br />
|-<br />
|[[Brunel]]<br />
| <br />
| <span style="color:green">25: 2GB <- [http://www.scotgrid.ac.uk/wiki GLA] @ 32Mb/s</span><br />
|<br />
|<span style="color:green">24: 92GB->[[RAL Tier1|RAL T1]] @ 59Mb/s</span><br />
| <span style="color:green">8: 135GB <- [http://www.scotgrid.ac.uk/wiki GLA] @ 57Mb/s</span><br />
|<br />
|<br />
<br />
|-<br />
|[[IC-HEP]]<br />
| <span style="color:green">16: 250MB <- [[RAL Tier1|T1]]@84Mb/s[[IC-HEP#Transfer Tests 2005-12-16|*]]</span><br />
| <br />
|<span style="color:green">20: 140GB -> [[RAL Tier1|RAL]]@190Mb/s[[IC-HEP#Transfer_Tests_2006-02-20|*]]</span><br><span style="color:green">22: <- [[RAL Tier1|RAL]]@80Mb/s[[IC-HEP#Transfer_Tests_2006-02-22|*]]</span><br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[IC-LeSC]]<br />
|<br />
|<br />
|<br />
|<br />
|<span style="color:green">10 250GB->[http://www.scotgrid.ac.uk/wiki GLA]@95.27Mbit/sec[[IC-LeSC#April_10_2006:_LeSC_to_GLA|*]]</span><br><span style="color:green">11 250GB<-[http://www.scotgrid.ac.uk/wiki GLA]@156.16Mbit/sec[[IC-LeSC#April_11_2006:_GLA_to_LeSC|*]]</span><br />
|<br />
|<br />
<br />
|-<br />
|[[QMUL]]<br />
|<br />
|<span style="color:green">16: 50GB test. 67Mb/s [[QMUL#16.2F01.2F2006|*]]</span><br><br />
<span style="color:green">17: 10GB test. 64Mb/s [[QMUL#16.2F01.2F2006|*]]</span><br><br />
<span style="color:green">17: 66GB test. ~80Mb/s [[QMUL#16.2F01.2F2006|*]]</span><br />
|<span style="color:green">14: 1TB -> [[RAL Tier1|RAL]]@172Mb/s[[QMUL#SC4_Transfer_Test|*]]</span><br><span style="color:green">15: 1TB <- [[RAL Tier1|RAL]]@118Mb/s[[QMUL#SC4_Transfer_Test|*]]</span><br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[RHUL]]<br />
|<br />
|<span style="color:green">19: 2GB <- [http://www.scotgrid.ac.uk/wiki GLA] @35Mb/s (Wed day)[[RHUL_site_log#2006-01-19-am|*]]</span><br><br />
<span style="color:green">19: 53GB <- [http://www.scotgrid.ac.uk/wiki GLA] @49Mb/s (Wed night)[[RHUL_site_log#2006-01-19-evening|*]]</span><br><br />
<span style="color:green">20: 200 GB <- [http://www.scotgrid.ac.uk/wiki GLA] @59Mb/s (Fri night)[[RHUL_site_log#2006-01-20|*]]</span><br><br />
|<br />
|<br />
|<br />
|<span style="color:green">16: 128GB<-[http://www.scotgrid.ac.uk/wiki Gla] @35Mb/s</span><br><br />
<span style="color:green">18: 250GB->[http://www.scotgrid.ac.uk/wiki Gla] @58Mb/s</span><br><br />
<span style="color:red">19: 250GB<-[http://www.scotgrid.ac.uk/wiki Gla]</span><br />
|<br />
<br />
|-<br />
|[[UCL-HEP]]<br />
|<br />
|<br />
|<br />
|<span style="color:green">21: 421GB<-[[RAL Tier1|RAL T1]]@71Mb/s[[UCL-HEP#21-22.2F03.2F2006|*]]</span><br><span style="color:green">25: 421GB->[[RAL Tier1|RAL T1]]@63Mb/s[[UCL-HEP#24-25.2F03.2F2006|*]]</span><br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[UCL-CENTRAL]]<br />
|<br />
|<br />
| <br />
|<br />
|<span style="color:green">26 (not office hours): 250GB<-[http://www.scotgrid.ac.uk/wiki GLA]@90Mb/s[[UCL-CENTRAL|*]]</span><br><span style="color:green">27 (not office hours): 250GB->[http://www.scotgrid.ac.uk/wiki GLA]@309Mb/s[[UCL-CENTRAL|*]]</span><br />
|<br />
|<span style="color:green">06/06 (office hours): 8GB<-[[Edinburgh]]@266Mb/s[[UCL-CENTRAL|*]]</span><br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="8" | [[SouthGrid]]<br />
<br />
|-<br />
|[[Birmingham]]<br />
|<br />
|<span style="color:green">16: 1TB <- [http://www.scotgrid.ac.uk/wiki GLA] @317Mb/s[[Birmingham_SC4#2006-01-16|*]]</span><br><span style="color:green">25: 1TB <- [[RAL Tier1|RAL T1]] @ 289Mb/s</span><br><span style="color:green">28: 1TB -> [[RAL Tier1|RAL T1]] @407Mb/s</span><br><span style="color:green">29: 1TB -> [[RAL Tier1|RAL T1]] @461Mb/s<br><br />
|<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Bristol]]<br />
|<br />
|<br />
|<span style="color:green">16: 96GB -> [[RAL Tier1|RAL]] @85Mb/s</span><br />
|<br />
|<span style="color:green">17: 500GB->[http://www.scotgrid.ac.uk/wiki GLA] @291Mb/s </span><br><span style="color:green">17: 500GB <-[http://www.scotgrid.ac.uk/wiki GLA] @117Mb/s</span><br />
|<br />
|<br />
<br />
|-<br />
|[[Cambridge]]<br />
|<br />
|<br />
|<span style="color:green">16: 240GB -> [[RAL Tier1|RAL]] @74Mb/s</span><br><br />
|<br />
|<span style="color:green">20: 787GB->[http://www.scotgrid.ac.uk/wiki GLA] @153Mb/s</span><br><span style="color:green">21: 1TB<-[http://www.scotgrid.ac.uk/wiki GLA] @293Mb/s</span><br />
|<br />
|<br />
<br />
|-<br />
|[[Oxford]]<br />
|<br />
|<span style="color:green">26: 1TB <- [[RAL Tier1|RAL T1]] @252Mb/s</span><br><span style="color:green">30: 1TB -> [[RAL Tier1|RAL T1]]@456Mb/s</span><br />
|<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[RAL Tier2]]<br />
| <span style="color:green">7: 1TB <- [[RAL Tier1|T1]] @197Mb/s[[RALPP_Local_SC4_Preparations#Status_at_08-12-2005|*]]</span><br><span style="color:green">8: 2GB <- [[RAL Tier1|T1]] @309Mb/s[[RALPP_Local_SC4_Preparations#Status_at_12-12-2005|*]]</span><br><span style="color:green">13: 0.86TB <- [[RAL Tier1|T1]] @194Mb/s[[RALPP_Local_SC4_Preparations#Status_at_14-12-2005|*]]</span><br />
|<span style="color:green">10: 0.985TB <- [[RAL Tier1|T1]] @397Mb/s[[RALPP_Local_SC4_Preparations#Status_at_11-01-2006|*]]</span><br><span style="color:green">25: 1TB -> [[RAL Tier1|RAL T1]]@388Mb/s[[RALPP_Local_SC4_Preparations#Status_at_26-01-2006|*]]</span><br />
|<br />
|<br />
|<br />
|<br />
|<br />
|-<br />
<br />
|}<br />
<br />
===Table Format===<br />
<br />
To keep this table tidy and legible, please follow these notes on Table Format:<br />
# Use <span style="color:green">green</span> for completed tests and <span style="color:red">red</span> for scheduled tests. Get these using, e.g., <nowiki><span style="color:green">green</span></nowiki>.<br />
# At your discretion, leave out smaller tests (say <100GB), which can instead be recorded in your site page. This table is meant to give a high level view of the larger tests, not to record every single transfer you submit.<br />
# Prefix each test with the day, then a colon, e.g., for a test starting on the fifth, use "5:".<br />
# Use "->" to indicate "to", and "<-" to indicate "from".<br />
# Use a line break in separate tests done in the same month. (<nowiki><br></nowiki>)<br />
# Once a test has been done, indicate a rate in Mb/s (bits, not bytes!).<br />
# Any extra information about a transfer can be put into a link back your local site page, using a * as the link text, e.g., <nowiki>[[Glasgow#Transfer 2005 12 12|*]]</nowiki>. Thanks to [[User:Chris brew|Chris]] for that idea.<br />
# The convention for deciding which row a planned transfer/result goes in is that it goes in the row corresponding to the site being tested, e.g. in August 06 it was the Castor filesystem at RAL T1 being tested, whereas in September 06 it was the non-reference sites' bandwidth capabilities being tested.<br />
<br />
[[Category: Service Challenge 4]]</div>Michael kenyonhttps://www.gridpp.ac.uk/wiki/Service_Challenge_4_Site_StatusService Challenge 4 Site Status2008-01-24T14:10:44Z<p>Michael kenyon: </p>
<hr />
<div>The '''service challenge 4 site status''' table aims to show what sites are planning, and what they have completed, each month leading up to [[Service Challenge 4|service challenge 4]]. A more detailed description of the service may be available from the individual site pages. <br />
<br />
{|border="1",cellpadding="1"<br />
|+GridPP Site Status for Service Challenge 4<br />
<br />
|-style="background:#7C8AAF;color:white"<br />
!Site<br />
!October 2005<br />
!November 2005<br />
!December 2005<br />
!January 2006<br />
!February 2006<br />
!March 2006<br />
<br />
<br />
<br />
|-<br />
|[[RAL Tier1]]<br />
|[[DCache|dCache]] setup and in production, [[LCG File Catalog|LFC]] setup, Alice [[VOBox]] setup. The [[RAL_Tier1_File_Transfer_Service|RAL FTS]] service is installed at version 1.3. A<br />
[[VOBox]] for [[ATLAS]] will be set up before the end of the month.<br />
|Setup of the [[VOBox]] for [[ALICE]] and [[ATLAS]] is still ongoing; it was done, but the requirements are changing.<br />
|A [[VOBox]] for [[LHCb]] will be set up by the end of the year.<br />
|<br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="7" | [[:Category:NorthGrid]]<br />
<br />
|-<br />
|[[Lancaster]]<br />
|[[DCache|dCache]] done, ukLight done<br />
|<br />
|[[LCG File Catalog|LFC]] done<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Liverpool]]<br />
|[[DCache|dCache]] done<br />
|<br />
|<br />
|[[Service_Challenge_Transfer_Tests|FTS]]<br />
|<br />
|<br />
<br />
|-<br />
|[[Manchester]]<br />
|[[DCache|dCache]] done<br />
|<br />
|[[LCG File Catalog|LFC]] done, [[Service_Challenge_Transfer_Tests|FTS]] done<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Sheffield]]<br />
|[[DCache|dCache]]<br />
|<br />
|<br />
|<br />
|<br />
|<br />
<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="7" | [[ScotGrid]]<br />
<br />
|-<br />
|[[Durham]]<br />
|<br />
| [[Disk Pool Manager|DPM]] done. [[LCG File Catalog|LFC]] done.<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Edinburgh]]<br />
| [[DCache|dCache]] and [[Disk_Pool_Manager|DPM]] already.<br />
|<br />
| [[LCG File Catalog | LFC]] done.<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[http://www.scotgrid.ac.uk/wiki Glasgow]<br />
| [[Disk Pool Manager|DPM]] already.<br />
| [[LCG File Catalog | LFC]] done.<br />
|<br />
|<br />
|<br />
|<br />
<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="7" | [[London Tier2]]<br />
<br />
|-<br />
|[[Brunel]]<br />
| <br />
|<br />
|<br />
| [[DPM|DPM]] (Done)<br />
|<br />
|<br />
<br />
|-<br />
|[[IC-HEP]]<br />
| [[dCache]], FTS installed <br />
|<br />
| [[LCG File Catalog|LFC]] done.<br />
| <br />
|<br />
|<br />
<br />
|-<br />
|[[IC-LeSC]]<br />
| [[DPM|DPM]] (test)<br />
|<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[QMUL]]<br />
|<br />
| [[DPM|DPM]] installed on PoolFS<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[RHUL]]<br />
|<br />
|<br />
| [[DPM]] Done. <br />
| <br />
|<br />
|<br />
<br />
|-<br />
|[[UCL-HEP]]<br />
|<br />
| [[DPM]] installed, migrated from classic SE. In production<br />
|<br />
|<br />
|<br />
| [[Service_Challenge_Transfer_Tests|FTS]] done. [[LCG File Catalog|LFC]] installed<br />
<br />
|-<br />
|[[UCL-CCC]]<br />
|<br />
|<br />
| [[DPM]] not fixed <br />
| <br />
|<br />
|<br />
<br />
|-style="background:#D6D4E3"<br />
!colspan="7" | [[SouthGrid]]<br />
<br />
|-<br />
|[[Birmingham]]<br />
| DPM/SRM installed (migrated from classic SE - see [[Migration from a classic to a DPM/SRM SE]])<br />
| [[LCG File Catalog|LFC]] installed - working on [[EGEE_File_Transfer_Service|FTS]]<br />
|[[EGEE_File_Transfer_Service|FTS]] is installed<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Bristol]]<br />
| DPM/SRM installed <br />
|<br />
|<br />
|[[LCG File Catalog|LFC]] is installed<br />
|<br />
|<br />
<br />
|-<br />
|[[Cambridge]]<br />
| Migrated from classic to DPM/SRM SE<br />
|<br />
|[[LCG File Catalog|LFC]] is installed<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[Oxford]]<br />
| DPM/SRM installed, need to add one disk pool<br />
|<br />
|[[LCG File Catalog|LFC]] is installed<br />
|<br />
|<br />
|<br />
<br />
|-<br />
|[[RAL Tier2]]<br />
|[[dCache]] is installed.<br />
|[[EGEE_File_Transfer_Service|FTS]] is installed<br />
|<br />
|<br />
|<br />
|<br />
<br />
|-<br />
<br />
|}<br />
<br />
==See also==<br />
*[[Service Challenges]]<br />
*[[Service Challenge 4]]<br />
*[[Service Challenge Transfer Tests]]<br />
<br />
==Other resources==<br />
<br />
<br />
[[Category:Service Challenge 4]]</div>Michael kenyonhttps://www.gridpp.ac.uk/wiki/ScotGridScotGrid2008-01-24T14:09:32Z<p>Michael kenyon: </p>
<hr />
<div>ScotGrid comprises the Universities of [[Durham]], [[Edinburgh]] and [http://www.scotgrid.ac.uk/wiki Glasgow].<br />
<br />
==Operations==<br />
<br />
===Cross Site Support===<br />
<br />
[[ScotGrid Cross Site Support]] provides the ability for sys admins across the ScotGrid T2 to intervene at all clusters.<br />
<br />
==Contacts==<br />
<br />
===Security===<br />
<br />
====[http://www.scotgrid.ac.uk/wiki Glasgow]====<br />
<br />
* Site: grid-cert@physics.gla.ac.uk (As published in [https://goc.grid-support.ac.uk/gridsite/gocdb2/index.php?siteSelect=345 GOC])<br />
* Department (Physics and Astronomy): cert@physics.gla.ac.uk<br />
* University: cert@gla.ac.uk<br />
<br />
====[[Edinburgh]]====<br />
<br />
* All levels: irt@ed.ac.uk (As published in [https://goc.grid-support.ac.uk/gridsite/gocdb2/index.php?siteSelect=63 GOC])<br />
<br />
====[[Durham]]====<br />
<br />
* Site: oper.ip3@durham.ac.uk (As published in [https://goc.grid-support.ac.uk/gridsite/gocdb2/index.php?siteSelect=111 GOC])<br />
* Department (Physics Department): Physics.ITS@durham.ac.uk<br />
* University: m.j.costello@durham.ac.uk<br />
<br />
==Other resources==<br />
* [http://www.scotgrid.ac.uk/ ScotGrid HomePage]<br />
* [http://scotgrid.blogspot.com/ ScotGrid Blog] (with [http://scotgrid.blogspot.com/atom.xml feed])<br />
* [[Scotgrid-Dashboard]]<br />
<br />
[[Category:ScotGrid]]</div>Michael kenyonhttps://www.gridpp.ac.uk/wiki/Category:GlasgowCategory:Glasgow2008-01-24T14:06:56Z<p>Michael kenyon: </p>
<hr />
<div>All that is right and good pertaining to the Grid Cluster in [http://www.scotgrid.ac.uk/wiki Glasgow].<br />
<br />
[[Category: ScotGrid]]</div>Michael kenyon