https://www.gridpp.ac.uk/w/api.php?action=feedcontributions&user=Aggarwa&feedformat=atom
GridPP Wiki - User contributions [en]
2024-03-29T11:17:17Z
User contributions
MediaWiki 1.22.0
https://www.gridpp.ac.uk/wiki/Viking-Cluster
Viking-Cluster
2007-09-18T15:43:12Z
<p>Aggarwa: </p>
<hr />
<div>* Testing Procedure Setup<br />
<br />
<pre><br />
<br />
1. Add Viking CE to site BDII on mars-ce2.<br />
-- On mars-ce2 change the file /opt/bdii/etc/bdii-update.conf<br />
-- Add the line: CE1 ldap://viking-ce0.viking.lesc.doc.ic.ac.uk:2135/mds-vo-name=local,o=grid<br />
-- Restart the bdii service.<br />
-- Check: <br />
ldapsearch -x -H ldap://mars-ce2.mars.lesc.doc.ic.ac.uk:2170 -b Mds-vo-name=UKI-LT2-IC-LeSC,o=Grid<br />
<br />
2. Run the following tests:<br />
<br />
* lcg-infosites --vo dteam ce | grep viking<br />
<br />
* globusrun -a -r viking-ce0.viking.lesc.doc.ic.ac.uk<br />
<br />
* globus-job-run viking-ce0.viking.lesc.doc.ic.ac.uk /bin/hostname<br />
viking-ce0.viking.lesc.doc.ic.ac.uk<br />
<br />
* globus-job-submit viking-ce0.viking.lesc.doc.ic.ac.uk:2119/jobmanager-sge -q 10min /bin/hostname<br />
<br />
</pre></div>
Aggarwa
https://www.gridpp.ac.uk/wiki/FCR_UK
FCR UK
2007-09-05T13:49:43Z
<p>Aggarwa: </p>
<hr />
<div>* Python script to generate the UK excluded-sites HTML page.<br />
* Run: python fcr.py<br />
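The core of the script below is splitting each GlueVOViewLocalID dn from the exclude LDIF into a (site, VO, CE) triple, keeping only UK CEs, and sorting case-insensitively by site. A minimal Python 3 re-sketch of that logic; the sample dn lines are illustrative only, not taken from the live exclude.ldif feed:

```python
# Python 3 re-sketch of the parsing/sorting done by fcr.py below.
# The sample dn lines are made up for illustration.

def parse_dn(line):
    """Split a GlueVOViewLocalID dn into [site, vo, ce]; UK CEs only."""
    if 'GlueVOViewLocalID' not in line:
        return None
    el = line.split(',')
    vo = el[0].split('=')[-1]
    ce = el[1].split('=')[-1]
    site = el[2].split('=')[-1]
    if '.uk:' not in ce:
        return None
    return [site, vo, ce]

def getfcrtable(lines):
    # Collect matching triples, then sort case-insensitively by site
    # name (the role played by triplecomp() in the original script).
    result = [t for t in (parse_dn(l) for l in lines) if t]
    result.sort(key=lambda t: t[0].lower())
    return result

sample = [
    "dn: GlueVOViewLocalID=atlas,GlueCEUniqueID=ce.example.ac.uk:2119/jobmanager-sge-long,Mds-Vo-name=UKI-EXAMPLE,o=grid",
    "dn: GlueVOViewLocalID=cms,GlueCEUniqueID=ce.example.fr:2119/jobmanager-pbs-short,Mds-Vo-name=FR-EXAMPLE,o=grid",
]
print(getfcrtable(sample))
```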
<br />
<pre><br />
#!/usr/bin/python<br />
<br />
import string<br />
import urllib<br />
<br />
<br />
def triplecomp(x,y):<br />
    res=cmp(x[0].lower(),y[0].lower())<br />
    return res<br />
<br />
def getfcrtable(url,region):<br />
    fcrserver=urllib.urlopen(url)<br />
    i=fcrserver.readline()<br />
    result=[]<br />
    sites=[]<br />
    while(i):<br />
        if (i.find('GlueVOViewLocalID')!=-1):<br />
            el=i.split(',')<br />
            vo=el[0].split('=')[-1]<br />
            ce=el[1].split('=')[-1]<br />
            site=el[2].split('=')[-1]<br />
            if(ce.find('.uk:')!=-1):<br />
                result.append([site,vo,ce])<br />
                sites.append(site)<br />
        i=fcrserver.readline()<br />
    fcrserver.close()<br />
    result.sort(triplecomp)<br />
    return result<br />
<br />
def genhead():<br />
    print '''<html><head><title>FCR Excluded Sites</title><br />
<style><br />
body { margin: 0px;<br />
       padding: 0px;<br />
       background-color: white;<br />
       xx-font-size: 75%;<br />
       font-size: 0.8em;<br />
       line-height: 16px;<br />
       text-align: center;<br />
       font-family: Arial, Verdana, Geneva, sans-serif;<br />
}<br />
<br />
td { font-size: 0.8em; }<br />
<br />
td.maincell { font-size: 0.8em; }<br />
</style><br />
</head>'''<br />
<br />
def genfoot():<br />
    print '''</body></html>'''<br />
<br />
def gentable(res):<br />
    print '''<body>'''<br />
    print '''<p align="left"><b>This page shows the list of sites that are excluded from the top-level BDIIs by the <a href="https://lcg-fcr.cern.ch:8443/fcr/fcr.cgi">freedom of choice tool</a></b></p>'''<br />
    print '''<div align="left">'''<br />
    print '''<ul>'''<br />
    print '''<li>Don't worry if your site is down</li>'''<br />
    print '''<li><b>WORRY</b> if your site is not down</li>'''<br />
    print '''<ul>'''<br />
    print '''<li>Check the <a href="https://lcg-fcr.cern.ch:8443/fcr/fcr.cgi">FCR Web Page</a></li>'''<br />
    print '''<li>Check the <a href="https://lcg-sam.cern.ch:8443/sam/sam.py">SAM</a> tests for the excluded VO</li>'''<br />
    print '''</ul>'''<br />
    print '''<li>For more information on the FCR please go <a href="https://lcg-fcr.cern.ch:8443/fcr/help.cgi">here</a></li>'''<br />
    print '''</ul>'''<br />
    print '''</div>'''<br />
    print '''<table border="0">'''<br />
    print '''<tr>'''<br />
    print '''<td align="center"><b>SITE NAME</b></td>'''<br />
    print '''<td align="center"><b>VO</b></td>'''<br />
    print '''<td align="center"><b>Queue Contact</b></td>'''<br />
    print '''</tr>'''<br />
    for line in res:<br />
        print "<tr>"<br />
        print '''<td bgcolor="#cccccc" align="center">'''+line[0]+"</td>"<br />
        print '''<td bgcolor="#cccccc" align="center">'''+line[1]+"</td>"<br />
        print '''<td bgcolor="#cccccc" align="center">'''+line[2]+"</td>"<br />
        print "</tr>"<br />
    print '''</table>'''<br />
<br />
<br />
res=getfcrtable("http://lcg-fcr.cern.ch:8083/fcr-data/exclude.ldif",'uk') <br />
genhead()<br />
gentable(res)<br />
genfoot()<br />
</pre></div>
Aggarwa
https://www.gridpp.ac.uk/wiki/IC-ICT_PBS-PRO_cluster_installation
IC-ICT PBS-PRO cluster installation
2007-03-15T14:53:20Z
<p>Aggarwa: </p>
<hr />
<div>== CE installation ==<br />
<br />
* Install lcg-CE rpms<br />
* up2date-nox -u lcg-CE<br />
<br />
<pre><br />
[root@hep-ce aggarwa]# up2date-nox -u lcg-CE --nosig --dry-run<br />
<br />
Fetching Obsoletes list for channel: rhel-i386-as-3...<br />
<br />
Fetching Obsoletes list for channel: glite3...<br />
<br />
Fetching obsoletes list for http://glitesoft.cern.ch/EGEE/gLite/APT/R3.0/rhel30/...<br />
<br />
Fetching http://glitesoft.cern.ch/EGEE/gLite/APT/R3.0/rhel30//headers/header.info...<br />
########################################<br />
####################<br />
Fetching Obsoletes list for channel: lcg2_CA...<br />
<br />
Fetching obsoletes list for http://linuxsoft.cern.ch/LCG-CAs/current...<br />
<br />
Fetching http://linuxsoft.cern.ch/LCG-CAs/current/headers/header.info...<br />
########################################<br />
####################<br />
Fetching rpm headers...<br />
########################################<br />
<br />
Name Version Rel<br />
----------------------------------------------------------<br />
lcg-CE 3.0.11 1 noarch<br />
<br />
<br />
Testing package set / solving RPM inter-dependencies...<br />
<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
There was a package dependency problem. The message was:<br />
<br />
Unresolvable chain of dependencies:<br />
edg-java-security-1.5.11-1_sl3 requires j2sdk >= 1.4.1<br />
edg-java-security-client-1.5.11-1_sl3 requires j2sdk >= 1.4.1<br />
edg-java-security-test-1.5.11-1_sl3 requires j2sdk >= 1.4.1<br />
edg-wl-common-api-java-interface_gcc3_2_ requires j2sdk >= 1.4.1_01<br />
edg-wl-common-api-java_gcc3_2_2-lcg2.1.7 requires j2sdk >= 1.4.1_01<br />
edg-wl-ui-api-java-interface_gcc3_2_2-lc requires j2sdk >= 1.4.1_01<br />
edg-wl-ui-api-java_gcc3_2_2-lcg2.1.74-3_ requires j2sdk >= 1.4.1_01<br />
lcg-CE 3.0.11-1 requires j2sdk<br />
mysql 3.23.58-16.RHEL3.1 conflicts with MySQL<br />
<br />
<br />
The following packages were added to your selection to satisfy dependencies:<br />
Package Required by<br />
----------------------------------------------------------------------------<br />
boost-g3-1.29.1-06vh_sl3.i486 lcg-CE-3.0.11-1 boost-g3<br />
bouncycastle-jdk14-1.19-2.noarch lcg-CE-3.0.11-1 bouncycastle-jdk14<br />
edg-info-ce-lcg2.6.39-1_sl3.i386 lcg-CE-3.0.11-1 edg-info-ce<br />
edg-info-main-lcg3.0.23-1_sl3.i386 lcg-CE-3.0.11-1 edg-info-main<br />
edg-wl-bypass_gcc3_2_2-lcg2.5.3-29_sl3.i486lcg-CE-3.0.11-1 edg-wl-bypass_gcc3_2_2<br />
edg-wl-chkpt-api_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-chkpt-api_gcc3_2_2<br />
edg-wl-common-api-java-interface_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-common-api-java-interface_gcc3_2_2<br />
edg-wl-common-api-java_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-common-api-java_gcc3_2_2<br />
edg-wl-common-api_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-common-api_gcc3_2_2<br />
edg-wl-config_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-config_gcc3_2_2<br />
edg-wl-locallogger_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-locallogger_gcc3_2_2<br />
edg-wl-logging-api-c_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-logging-api-c_gcc3_2_2<br />
edg-wl-logging-api-cpp_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-logging-api-cpp_gcc3_2_2<br />
edg-wl-logging-api-sh_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-logging-api-sh_gcc3_2_2<br />
edg-wl-services-common_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-services-common_gcc3_2_2<br />
edg-wl-ui-api-cpp_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-ui-api-cpp_gcc3_2_2<br />
edg-wl-ui-api-java-interface_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-ui-api-java-interface_gcc3_2_2<br />
edg-wl-ui-api-java_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-ui-api-java_gcc3_2_2<br />
edg-wl-ui-cli_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-ui-cli_gcc3_2_2<br />
edg-wl-ui-config_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-ui-config_gcc3_2_2<br />
edg-wl-ui-gui_gcc3_2_2-lcg2.1.74-3_sl3.i486lcg-CE-3.0.11-1 edg-wl-ui-gui_gcc3_2_2<br />
gridice-sensor-1.6.0-23.i386 lcg-CE-3.0.11-1 gridice-sensor<br />
lcg-version-3.0.2-1.noarch lcg-CE-3.0.11-1 lcg-version<br />
perl-SOAP-Lite-0.55-sl3.i386 lcg-CE-3.0.11-1 perl-SOAP-Lite<br />
perl-Tie-Syslog-1.07-1.i386 lcg-CE-3.0.11-1 perl-Tie-Syslog<br />
perl-URI-1.21-7.noarch edg-pool2info-1.0.1-1_sl3 perl-URI<br />
perl-URI-1.21-7.noarch edg-netmon-info-provider-1.0.8-1_sl3 perl-URI<br />
perl-XML-SAX-Base-1.04-1.i386 lcg-CE-3.0.11-1 perl-XML-SAX-Base<br />
perl-ldap-0.31-sl3.i386 lcg-CE-3.0.11-1 perl-ldap<br />
<br />
[root@hep-ce aggarwa]# up2date-nox -u j2sdk --nosig --dry-run<br />
<br />
Fetching Obsoletes list for channel: rhel-i386-as-3...<br />
<br />
Fetching Obsoletes list for channel: glite3...<br />
<br />
Fetching Obsoletes list for channel: lcg2_CA...<br />
<br />
Fetching rpm headers...<br />
########################################<br />
<br />
Name Version Rel<br />
----------------------------------------------------------<br />
<br />
All packages are currently up to date<br />
<br />
</pre><br />
<br />
* Install j2sdk<br />
* Install perl-Net-LDAP<br />
<br />
* Re-try lcg-CE installation<br />
<br />
<br />
<pre><br />
Installing...<br />
1:glite-rgma-base ########################################### [100%]<br />
2:log4j ########################################### [100%]<br />
3:lcg-info-generic ########################################### [100%]<br />
4:glite-apel-core ########################################### [100%]<br />
5:perl-URI ########################################### [100%]<br />
6:glite-essentials-java ########################################### [100%]<br />
7:gpt ########################################### [100%]<br />
<br />
......<br />
<br />
<br />
<br />
070315 13:53:44 [Warning] Asked for 196608 thread stack, but got 126976<br />
/usr/bin/mysql_install_db: line 1: 23723 Segmentation fault /usr/sbin/mysqld --bootstrap --skip-grant-tables --basedir=/ --datadir=/var/lib/mysql --skip-innodb --skip-bdb --skip-ndbcluster --user=mysql --max_allowed_packet=8M --net_buffer_length=16K<br />
cat: write error: Broken pipe<br />
Installation of system tables failed!<br />
<br />
Examine the logs in /var/lib/mysql for more information.<br />
You can also try to start the mysqld daemon with:<br />
/usr/sbin/mysqld --skip-grant &<br />
You can use the command line tool<br />
/usr/bin/mysql to connect to the mysql<br />
database and look at the grant tables:<br />
<br />
shell> /usr/bin/mysql -u root mysql<br />
mysql> show tables<br />
<br />
Try 'mysqld --help' if you have problems with paths. Using --log<br />
gives you a log in /var/lib/mysql that may be helpful.<br />
<br />
The latest information about MySQL is available on the web at<br />
http://www.mysql.com<br />
Please consult the MySQL manual section: 'Problems running mysql_install_db',<br />
and the manual section that describes problems on your OS.<br />
Another information source is the MySQL email archive.<br />
Please check all of the above before mailing us!<br />
And if you do mail us, you MUST use the /usr/bin/mysqlbug script!<br />
Starting MySQL...................................[FAILED]<br />
60:MySQL-client ########################################### [100%]<br />
61:edg-wl-ui-api-cpp_gcc3_########################################### [100%]<br />
62:edg-wl-logging-api-sh_g########################################### [100%]<br />
63:edg-wl-locallogger_gcc3########################################### [100%]<br />
64:edg-wl-ui-gui_gcc3_2_2 ########################################### [100%]<br />
65:edg-lcas_gcc3_2_2-voms_########################################### [100%]<br />
66:glite-security-voms-adm########################################### [100%]<br />
67:edg-java-data-util ########################################### [100%]<br />
68:lcg-auditlog ########################################### [100%]<br />
69:xerces-j1 ########################################### [100%]<br />
70:glite-rgma-command-line########################################### [100%]<br />
71:vdt_globus_info_server ########################################### [100%]<br />
72:lcg-extra-jobmanagers ########################################### [100%]<br />
73:glite-rgma-log4cpp ########################################### [100%]<br />
74:edg-wl-ui-api-java-inte########################################### [100%]<br />
75:edg-wl-common-api-java-########################################### [100%]<br />
76:edg-info-service ########################################### [100%]<br />
77:glite-apel-publisher ########################################### [100%]<br />
78:glite-rgma-gin ########################################### [100%]<br />
79:glite-rgma-log4j ########################################### [100%]<br />
error: unpacking of archive failed on file /opt/glite/share/doc/rgma-log4j/html/overview-tree.html;45f94f10: cpio: read<br />
There was a fatal RPM install error. The message was:<br />
There was a rpm unpack error installing the package: glite-rgma-log4j-5.0.2-1<br />
<br />
</pre><br />
<br />
* Install package glite-rgma-log4j-5.0.2-1<br />
<br />
<pre><br />
<br />
[root@hep-ce aggarwa]# up2date-nox -u glite-rgma-log4j --nosig<br />
<br />
Fetching Obsoletes list for channel: rhel-i386-as-3...<br />
<br />
Fetching Obsoletes list for channel: glite3...<br />
<br />
Fetching Obsoletes list for channel: lcg2_CA...<br />
<br />
Fetching rpm headers...<br />
########################################<br />
<br />
Name Version Rel<br />
----------------------------------------------------------<br />
glite-rgma-log4j 5.0.2 1 noarch<br />
<br />
<br />
Testing package set / solving RPM inter-dependencies...<br />
########################################<br />
error: rpmts_HdrFromFdno: MD5 digest: BAD Expected(6b81ac1c12fa8ca297a0c5c3a3bbf4bc) != (ffbca2a0aa5db9034a59bfb485f6bb52)<br />
glite-rgma-log4j-5.0.2-1.no ########################## Done.<br />
Preparing ########################################### [100%]<br />
<br />
Installing...<br />
1:glite-rgma-log4j ########################################### [100%]<br />
[root@hep-ce aggarwa]#<br />
<br />
</pre><br />
<br />
* up2date-nox -u glite-yaim --nosig</div>
Aggarwa
https://www.gridpp.ac.uk/wiki/New_SGE_cluster_installation
New SGE cluster installation
2007-01-15T14:44:14Z
<p>Aggarwa: </p>
<hr />
<div><br />
== CE installation ==<br />
<br />
* Install lcg-CE rpms<br />
* up2date-nox -u lcg-CE<br />
* Install perl-Net-LDAP<br />
* Re-try lcg-CE installation<br />
<br />
<pre><br />
[root@ce00 root]# up2date-nox -u lcg-CE --nosig<br />
<br />
Fetching Obsoletes list for channel: rhel-i386-as-3...<br />
<br />
Fetching Obsoletes list for channel: rhel-i386-as-3-extras...<br />
<br />
Fetching Obsoletes list for channel: ic-hep-as3-i386...<br />
<br />
Fetching Obsoletes list for channel: rhel-i386-as-3-fastrack...<br />
<br />
Fetching Obsoletes list for channel: glite3...<br />
<br />
Fetching Obsoletes list for channel: lcg2_CA...<br />
<br />
Fetching rpm headers...<br />
########################################<br />
<br />
Name Version Rel <br />
----------------------------------------------------------<br />
lcg-CE 3.0.5 0 noarch<br />
<br />
<br />
Testing package set / solving RPM inter-dependencies...<br />
<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
#######################################<br />
Downloading headers to solve dependencies...<br />
########################################<br />
The following packages were added to your selection to satisfy dependencies:<br />
<br />
Name Version Release<br />
--------------------------------------------------------------<br />
CASTOR-client 1.7.1.5 1.longname <br />
CGSI_gSOAP_2.3 1.1.5 1 <br />
CGSI_gSOAP_2.6 1.1.15 6 <br />
MySQL-client 4.1.11 0 <br />
MySQL-devel 4.1.11 0 <br />
MySQL-server 4.1.11 0 <br />
MySQL-shared 4.0.25 sl3 <br />
ares-devel 1.1.1 cel3 <br />
bdii 3.8.1 1_sl3 <br />
boost-g3 1.29.1 06vh_sl3 <br />
bouncycastle-jdk14 1.19 2 <br />
classads-g3 0.9.4 vh7_sl3 <br />
classads-jar 1.1 2 <br />
cleanup-grid-accounts 1.0.1 1 <br />
cog-jar 1.1 1 <br />
commons-cli 1.0_beta2_edg 2edg <br />
commons-logging 1.0.2 12 <br />
cppunit 1.10.2 3 <br />
edg-allschema-config 0.2.1 1 <br />
edg-brokerinfo_gcc3_2_2 2.1 5_sl3 <br />
edg-fabricMonitoring 2.5.4 4 <br />
edg-gpt-profile 1.0.0 1 <br />
edg-gridftp-client 1.2.5 1 <br />
edg-gridftpd 1.1.2 1_sl3 <br />
edg-info-ce lcg2.6.39 1_sl3 <br />
edg-info-main lcg3.0.23 1_sl3 <br />
edg-info-service 1.0.0 1 <br />
edg-java-data-util 1.3.22 1_sl3 <br />
edg-java-security 1.5.11 1_sl3 <br />
edg-java-security-client 1.5.11 1_sl3 <br />
edg-java-security-test 1.5.11 1_sl3 <br />
edg-lcas_gcc3_2_2 1.1.22 1_sl3 <br />
edg-lcas_gcc3_2_2-interface 1.0.3 1_sl3 <br />
edg-lcas_gcc3_2_2-voms_plugins 1.1.22 1_sl3 <br />
edg-lcmaps_gcc3_2_2 0.0.30 1_sl3 <br />
edg-lcmaps_gcc3_2_2-basic_plugins 0.0.30 1_sl3 <br />
edg-lcmaps_gcc3_2_2-dummy_plugins 0.0.30 1_sl3 <br />
edg-lcmaps_gcc3_2_2-interface 0.0.1 1_sl3 <br />
edg-lcmaps_gcc3_2_2-voms_plugins 0.0.30 1_sl3 <br />
edg-mkgridmap 2.6.1 1_sl3 <br />
edg-mkgridmap-conf 2.6.1 1_sl3 <br />
edg-netconf 1.1.3 1_sl3 <br />
edg-netmon-info-provider 1.0.8 1_sl3 <br />
edg-pool2info 1.0.1 1_sl3 <br />
edg-profile 2.0.9 1 <br />
edg-wl-bypass_gcc3_2_2 lcg2.5.3 29_sl3 <br />
edg-wl-chkpt-api_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-common-api-java-interface_gcc3_2_2lcg2.1.74 3_sl3 <br />
edg-wl-common-api-java_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-common-api_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-config_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-locallogger_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-logging-api-c_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-logging-api-cpp_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-logging-api-sh_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-services-common_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-ui-api-cpp_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-ui-api-java-interface_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-ui-api-java_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-ui-cli_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-ui-config_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg-wl-ui-gui_gcc3_2_2 lcg2.1.74 3_sl3 <br />
edg_gatekeeper_gcc3_2_2-gcc32dbg_pgm 2.2.15 1_sl3 <br />
fetch-crl 2.0 1 <br />
gacl 0.9.2 1_gcc3_2_2_sl3 <br />
glite-apel-core 1.0.1 0 <br />
glite-apel-lsf 1.0.0 1 <br />
glite-apel-pbs 1.0.0 1 <br />
glite-apel-publisher 1.0.0 1 <br />
glite-essentials-cpp 1.1.1 1_EGEE <br />
glite-essentials-java 1.2.0 2_EGEE <br />
glite-rgma-api-c 5.0.8 1 <br />
glite-rgma-api-cpp 5.0.13 1 <br />
glite-rgma-api-java 5.0.3 1 <br />
glite-rgma-api-python 5.0.7 1 <br />
glite-rgma-base 5.0.6 1 <br />
glite-rgma-command-line 5.0.3 1 <br />
glite-rgma-gin 5.0.7 1 <br />
glite-rgma-log4cpp 5.0.3 1 <br />
glite-rgma-log4j 5.0.2 1 <br />
glite-rgma-stubs-servlet-java 5.0.5 1 <br />
glite-security-trustmanager 1.8.3 1 <br />
glite-security-util-java 1.3.4 1 <br />
glite-security-voms-admin-client 1.2.13 1 <br />
glite-security-voms-admin-interface 1.0.3 1 <br />
glite-security-voms-api 1.6.16 3 <br />
glite-security-voms-api-c 1.6.16 4 <br />
glite-security-voms-api-cpp 1.6.16 4 <br />
glite-security-voms-clients 1.6.16 2 <br />
globus-config 0.23 1.lcg <br />
globus-initialization 2.2.4 5 <br />
glue-schema 1.2.2 1_sl3 <br />
gpt VDT1.2.2rh9 1 <br />
gridice-sensor 1.6.0 23 <br />
gsiopenssh VDT1.2.2rh9 1 <br />
gssklog-cern 0.10 1 <br />
j2sdk_profile 1.4.2_08 sl3 <br />
jakarta-axis 1.1rc2 3 <br />
jakarta-commons-logging 1.0.2 lcg1_sl3 <br />
jas-jar 1.0.0 1 <br />
jug 1.0.2_edg edg2 <br />
jxUtil-jar 1.0.1 1 <br />
lcg-auditlog 1.1.1 1_sl3 <br />
lcg-expiregridmapdir 2.0.0 1 <br />
lcg-extra-jobmanagers 1.1.8 1_sl3 <br />
lcg-info-dynamic-condor 1.1.1 1_sl3 <br />
lcg-info-dynamic-lsf 1.0.9 3_sl3 <br />
lcg-info-dynamic-pbs 1.0.12 1_sl3 <br />
lcg-info-dynamic-scheduler-generic 1.6.1 1 <br />
lcg-info-dynamic-scheduler-pbs 1.6.0 1 <br />
lcg-info-dynamic-software 1.0.3 1_sl3 <br />
lcg-info-generic 1.0.22 1_sl3 <br />
lcg-info-provider-software 1.0.5 1_sl3 <br />
lcg-info-templates 1.0.15 1_sl3 <br />
lcg-lcas-lcmaps 1.1.1 1 <br />
lcg-pbs-utils 1.0.0 1 <br />
lcg-schema 1.2.1 1_sl3 <br />
lcg-tank-gcc32dbg 2.0 1_sl3 <br />
lcg-tankspark-conf 2.0 2_sl3 <br />
lcg-version 3.0.2 1 <br />
lcg-vomscert-na48 1.0.0 1 <br />
lcg-vomscerts 4.2.0 1 <br />
libstdc++-ssa 3.5ssa 0.20030801.48 <br />
log4j 1.2.6 1jpp <br />
mm.mysql 2.0.14 1edg <br />
mpich 1.2.6 1.sl3.cl <br />
mpiexec 0.77 3.sl3 <br />
myproxy VDT1.2.2rh9 1 <br />
myproxy-config 1.1.8 13.edg1 <br />
mysql++_1.7.9_mysql.4.0.13__LCG_rh73_gcc321 1 <br />
netlogger-jar 1.0.0 1 <br />
perl-Crypt-SSLeay 0.51 4 <br />
perl-File-Tail 0.98 cel3 <br />
perl-IO-Socket-SSL 0.96 sl3 <br />
perl-Net-SSLeay 1.23 0.dag.rhel3 <br />
perl-SOAP-Lite 0.55 sl3 <br />
perl-TermReadKey 2.20 12 <br />
perl-Tie-Syslog 1.07 1 <br />
perl-Time-HiRes 1.38 3 <br />
perl-TimeDate 1.16 3_1.el3.at <br />
perl-XML-SAX-Base 1.04 1 <br />
python-logging 0.4.6 1 <br />
swig-runtime 1.3.21 1_EGEE <br />
torque 1.0.1p6 11.SL30X.st <br />
uberftp-client VDT1.2.2rh9_LCG2 <br />
vdt_globus_data_server VDT1.2.2rh9_LCG1 <br />
vdt_globus_essentials VDT1.2.2rh9_LCG2 <br />
vdt_globus_info_client VDT1.2.2rh9 1 <br />
vdt_globus_info_essentials VDT1.2.2rh9 1 <br />
vdt_globus_info_server VDT1.2.2rh9 1 <br />
vdt_globus_jobmanager_condor VDT1.2.2rh9 1 <br />
vdt_globus_jobmanager_lsf VDT1.2.2rh9 1 <br />
vdt_globus_jobmanager_pbs VDT1.2.2rh9 1 <br />
vdt_globus_rls_client VDT1.2.2rh9 1 <br />
vdt_globus_rm_client VDT1.2.2rh9 1 <br />
vdt_globus_rm_essentials VDT1.2.2rh9 1 <br />
vdt_globus_rm_server VDT1.2.2rh9 1 <br />
vdt_globus_sdk VDT1.2.2rh9_LCG2 <br />
voms-client_gcc3_2_2 1.5.4 2_sl3 <br />
xerces-c 1.7.0 sl3 <br />
xerces-j1 1.4.4 12jpp <br />
xml-commons 1.0 0.b2.3jpp_sl3 <br />
xml-commons-apis 1.0 0.b2.3jpp_sl3 <br />
libgcj-ssa 3.5ssa 0.20030801.48 <br />
redhat-java-rpm-scripts 1.0.2 2 <br />
<br />
<br />
....<br />
<br />
<br />
</pre><br />
* Install latest version of yaim [ glite-yaim-3.0.0-11.noarch.rpm ]<br />
* Install lcg-CA rpms (up2date-nox -u lcg-CA --nosig)<br />
* Change users to lt2- prefix in yaim functions.<br />
<br />
* The rest of the installation is the same as for IC-LeSC [https://www.gridpp.ac.uk/wiki/IC-LeSC#Deployment_of_gLite-3_0_0]<br />
* For WNs, just change the SGE environment settings to point to a different location.<br />
<br />
== Site problems ==<br />
<br />
=== SFT problem ===<br />
<br />
* SAM tests seem to use 1.8GB of virtual memory, so the jobs were killed by SGE.<br />
* Solution: removed the memory limit from SGE, and added an email address in sge.pm for aborted jobs.<br />
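For reference, on SGE this kind of per-job memory limit normally lives in the queue definition, edited with qconf -mq &lt;queue&gt;. A sketch of the attributes to relax (the attribute names are standard SGE queue-configuration fields; the exact queue name on this cluster is not recorded here):

```text
s_vmem    INFINITY
h_vmem    INFINITY
```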
<br />
=== Information system unstable ===<br />
<br />
* Change rgma-gin config on the site CE [https://www.gridpp.ac.uk/wiki/RALPP_Logbook_200610]<br />
* Setup priority for processes using renice.<br />
<pre><br />
#!/bin/bash<br />
for i in `pgrep -f "grid_manager_monitor_agent"`; do renice +18 -p $i; done<br />
for i in `pgrep -f "grid-monitor-job-status"`; do renice +18 -p $i; done<br />
for i in `pgrep -f "globus-job-manager"`; do renice +18 -p $i; done<br />
</pre><br />
<br />
* Check the information plugin for the new version.<br />
* Change bdii.conf and restart the bdii service: <br />
<pre><br />
BDII_SEARCH_TIMEOUT=120<br />
BDII_BREATHE_TIME=240<br />
</pre><br />
* Change the cachetime in /opt/globus/etc/grid-info-resource-ldif.conf<br />
<br />
<pre><br />
# This file was automatically generated by globus-mds startup script. Do not modify.<br />
<br />
dn: Mds-Vo-name=local,o=grid<br />
objectclass: GlobusTop<br />
objectclass: GlobusActiveObject<br />
objectclass: GlobusActiveSearch<br />
type: exec<br />
path: /opt/lcg/libexec/<br />
base: lcg-info-wrapper<br />
args:<br />
cachetime: 60<br />
timelimit: 20<br />
sizelimit: 250<br />
</pre></div>
Aggarwa
https://www.gridpp.ac.uk/wiki/CMS_transfer_log_for_all_TIER1_sites
CMS transfer log for all TIER1 sites
2006-04-05T11:19:42Z
<p>Aggarwa: </p>
<hr />
<div>== IC-HEP -> T1_RAL ==<br />
<br />
=== IC-UI (Success)===<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer1<br />
<br />
Tue Apr 04 11:57:38 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 11:57:38 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 11:57:38 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://dcache.gridpp.rl.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 11:57:39 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 11:57:39 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 11:57:39 BST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://dcache.gridpp.rl.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 11:57:44 BST 2006: srm returned requestId = -2145270894<br />
Tue Apr 04 11:57:44 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 11:57:46 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:57:52 BST 2006: FileRequestStatus fileID = -2145270893 is Done => copying of srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer is complete<br />
[aggarwa@gfe03 dcache]$ <br />
</pre><br />
<br />
<br />
=== CERN-UI (Success)===<br />
<pre><br />
<br />
[lxplus061] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer3<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer3<br />
<br />
Tue Apr 04 17:58:02 CEST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 17:58:02 CEST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 17:58:02 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://dcache.gridpp.rl.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 17:58:03 CEST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 17:58:03 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 17:58:03 CEST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer3<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer3"<br />
SRMClientV1 : copy, contacting service httpg://dcache.gridpp.rl.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 17:58:06 CEST 2006: srm returned requestId = -2145261568<br />
Tue Apr 04 17:58:06 CEST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 17:58:08 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 17:58:12 CEST 2006: FileRequestStatus fileID = -2145261567 is Done => copying of srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer is complete<br />
[lxplus061] /afs/cern.ch/user/s/swakef > <br />
<br />
</pre><br />
<br />
== IC-HEP -> T1_CERN ==<br />
<br />
=== IC-UI (Failed)===<br />
<br />
<pre><br />
<br />
[aggarwa@gfe03 aggarwa]$ srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer1<br />
<br />
Wed Apr 05 11:21:11 BST 2006: starting SRMCopyPullClient<br />
Wed Apr 05 11:21:13 BST 2006: In SRMClient ExpectedName: host<br />
Wed Apr 05 11:21:13 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://lxb6022.cern.ch:8443/srm/managerv1<br />
Wed Apr 05 11:21:23 BST 2006: connected to server, obtaining proxy<br />
Wed Apr 05 11:21:23 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Wed Apr 05 11:21:23 BST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://lxb6022.cern.ch:8443/srm/managerv1<br />
SRMClientV1 : copy: try # 0 failed with error<br />
SRMClientV1 : java.net.ConnectException: Connection timed out<br />
SRMClientV1 : copy: try again<br />
SRMClientV1 : sleeping for 10000 milliseconds before retrying<br />
SRMClientV1 : copy: try # 1 failed with error<br />
SRMClientV1 : java.net.ConnectException: Connection timed out<br />
SRMClientV1 : copy: try again<br />
SRMClientV1 : sleeping for 20000 milliseconds before retrying<br />
SRMClientV1 : copy: try # 2 failed with error<br />
SRMClientV1 : java.net.ConnectException: Connection timed out<br />
SRMClientV1 : copy: try again<br />
SRMClientV1 : sleeping for 30000 milliseconds before retrying<br />
Wed Apr 05 11:34:28 BST 2006: setting all remaining file statuses of request requestId=0 to "Done"<br />
Wed Apr 05 11:34:28 BST 2006: set all file statuses to "Done"<br />
[aggarwa@gfe03 aggarwa]$ <br />
<br />
</pre><br />
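The attempt above failed with repeated java.net.ConnectException: Connection timed out against httpg://lxb6022.cern.ch:8443, which points at a network/firewall problem rather than an SRM-level error. One quick way to tell the two apart is to test raw TCP reachability of the service port before blaming the SRM; a minimal sketch (the helper name is ours, host and port are taken from the log above):<br />

```python
import socket

def srm_port_open(host, port=8443, timeout=5):
    """Return True if a plain TCP connection to host:port succeeds
    within `timeout` seconds. A timeout or refusal here corresponds
    to the java.net.ConnectException seen in the failed transfer."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. srm_port_open("lxb6022.cern.ch") returning False from the UI
# host would confirm the door on port 8443 is unreachable from there
```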
<br />
=== CERN-UI (Failed)===<br />
<br />
<pre><br />
<br />
[lxplus017] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=<br />
/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer2<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer2<br />
<br />
Wed Apr 05 12:54:46 CEST 2006: starting SRMCopyPullClient<br />
Wed Apr 05 12:54:46 CEST 2006: In SRMClient ExpectedName: host<br />
Wed Apr 05 12:54:46 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://lxb6024.cern.ch:8443/srm/managerv1<br />
Wed Apr 05 12:54:48 CEST 2006: connected to server, obtaining proxy<br />
Wed Apr 05 12:54:48 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Wed Apr 05 12:54:48 CEST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer2<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer2"<br />
SRMClientV1 : copy, contacting service httpg://lxb6024.cern.ch:8443/srm/managerv1<br />
Wed Apr 05 12:54:54 CEST 2006: srm returned requestId = 872547791<br />
Wed Apr 05 12:54:54 CEST 2006: sleeping 6 seconds ...<br />
Wed Apr 05 12:55:00 CEST 2006: rfs.state is failed calling setFileStatus(872547791,0,"Done")<br />
Exception in thread "main" java.io.IOException: Request with requestId =872547791 rs.state = Failed rs.error = CastorSRMCopyInterface.c:618 CCI_doCopy() put(srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer2): CastorStagerInterface.c:2457 Permission denied (errno=0, serrno=0)<br />
at gov.fnal.srm.util.SRMCopyPullClient.start(SRMCopyPullClient.java:276)<br />
at gov.fnal.srm.util.SRMCopy.work(SRMCopy.java:450)<br />
at gov.fnal.srm.util.SRMCopy.main(SRMCopy.java:248)<br />
Wed Apr 05 12:55:01 CEST 2006: setting all remaining file statuses of request requestId=872547791 to "Done"<br />
Wed Apr 05 12:55:01 CEST 2006: setting file request 0 status to Done<br />
Wed Apr 05 12:55:01 CEST 2006: set all file statuses to "Done"<br />
[lxplus017] /afs/cern.ch/user/s/swakef > <br />
<br />
</pre><br />
<br />
== IC-HEP -> T1_FNAL ==<br />
<br />
<br />
=== IC-UI (Success)===<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001<br />
<br />
Tue Apr 04 11:38:02 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 11:38:02 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 11:38:02 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://cmssrm.fnal.gov:8443/srm/managerv1<br />
Tue Apr 04 11:38:03 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 11:38:03 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 11:38:03 BST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001"<br />
SRMClientV1 : copy, contacting service httpg://cmssrm.fnal.gov:8443/srm/managerv1<br />
Tue Apr 04 11:38:06 BST 2006: srm returned requestId = -2144463277<br />
Tue Apr 04 11:38:06 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 11:38:08 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:38:13 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:38:18 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:38:23 BST 2006: FileRequestStatus fileID = -2144463276 is Done => copying of srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer is complete<br />
</pre><br />
<br />
=== CERN-UI 1 (Failed)===<br />
<br />
<pre><br />
[lxplus061] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1<br />
<br />
Tue Apr 04 18:08:46 CEST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 18:08:46 CEST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 18:08:46 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://cmssrm.fnal.gov:8443/srm/managerv1<br />
Tue Apr 04 18:08:47 CEST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 18:08:47 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 18:08:47 CEST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://cmssrm.fnal.gov:8443/srm/managerv1<br />
Tue Apr 04 18:08:51 CEST 2006: srm returned requestId = -2144462390<br />
Tue Apr 04 18:08:51 CEST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 18:08:53 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 18:08:58 CEST 2006: FileRequestStatus is Failed => copying of srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer has failed<br />
Exception in thread "main" java.io.IOException: Request with requestId =-2144462390 rs.state = Failed rs.error = <br />
RequestFileStatus#-2144462389 failed with error:[ Space Reservation failed: SpaceManagerReserveSpaceMessage.getReturnCode () != 0 =>No Route to cell for packet {uoid=<1144166934870:39313>;path=[>cmswn428.fnal.gov_1@local];msg=Tunnel cell >cmswn428.fnal.gov_1@local< not found at >dCacheDomain<}]<br />
<br />
at gov.fnal.srm.util.SRMCopyPullClient.start(SRMCopyPullClient.java:276)<br />
at gov.fnal.srm.util.SRMCopy.work(SRMCopy.java:450)<br />
at gov.fnal.srm.util.SRMCopy.main(SRMCopy.java:248)<br />
Tue Apr 04 18:08:58 CEST 2006: setting all remaining file statuses of request requestId=-2144462390 to "Done"<br />
Tue Apr 04 18:08:58 CEST 2006: setting file request -2144462389 status to Done<br />
Tue Apr 04 18:09:00 CEST 2006: set all file statuses to "Done"<br />
[lxplus061] /afs/cern.ch/user/s/swakef > <br />
<br />
</pre><br />
<br />
=== CERN-UI 2 (Success) ===<br />
<br />
As the error message from the first attempt pointed to a transient space-reservation failure inside dCache, we tried again:<br />
<pre><br />
<br />
[lxplus017] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1<br />
<br />
Wed Apr 05 13:06:18 CEST 2006: starting SRMCopyPullClient<br />
Wed Apr 05 13:06:19 CEST 2006: In SRMClient ExpectedName: host<br />
Wed Apr 05 13:06:19 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://cmssrm.fnal.gov:8443/srm/managerv1<br />
Wed Apr 05 13:06:20 CEST 2006: connected to server, obtaining proxy<br />
Wed Apr 05 13:06:20 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Wed Apr 05 13:06:20 CEST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://cmssrm.fnal.gov:8443/srm/managerv1<br />
Wed Apr 05 13:06:24 CEST 2006: srm returned requestId = -2144447367<br />
Wed Apr 05 13:06:24 CEST 2006: sleeping 1 seconds ...<br />
Wed Apr 05 13:06:26 CEST 2006: sleeping 4 seconds ...<br />
Wed Apr 05 13:06:32 CEST 2006: sleeping 4 seconds ...<br />
Wed Apr 05 13:06:38 CEST 2006: sleeping 4 seconds ...<br />
Wed Apr 05 13:06:43 CEST 2006: sleeping 4 seconds ...<br />
Wed Apr 05 13:06:49 CEST 2006: sleeping 4 seconds ...<br />
Wed Apr 05 13:06:54 CEST 2006: sleeping 7 seconds ...<br />
Wed Apr 05 13:07:03 CEST 2006: FileRequestStatus fileID = -2144447366 is Done => copying of srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer is complete<br />
[lxplus017] /afs/cern.ch/user/s/swakef > <br />
</pre><br />
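The first attempt failed on a space-reservation error inside dCache ("No Route to cell ..."), while an identical re-run the next day succeeded, suggesting a transient server-side fault. srmcp's own retry_num/retry_timeout settings retry within a single request; for a fault like this the whole command has to be resubmitted. A minimal outer retry loop might look like this (the attempt count, wait, and example command are illustrative, not part of the original test):<br />

```python
import subprocess
import time

def run_with_retries(cmd, attempts=3, wait=60):
    """Re-run a complete command (given as an argv list, e.g. an srmcp
    invocation) until it exits 0, sleeping `wait` seconds between
    attempts. Returns True on success, False if every attempt failed."""
    for attempt in range(attempts):
        if subprocess.call(cmd) == 0:
            return True
        time.sleep(wait)
    return False

# e.g. run_with_retries(["srmcp", "--debug=true", src_surl, dest_surl])
```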
<br />
== IC-HEP -> T1_IN2P3 ==<br />
<br />
=== IC-UI (Success)===<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://ccsrm.in2p3.fr:8443/srm/managerv1SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789<br />
<br />
Tue Apr 04 11:43:33 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 11:43:33 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 11:43:33 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://ccsvli15.in2p3.fr:8443/srm/managerv1<br />
Tue Apr 04 11:43:34 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 11:43:34 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 11:43:34 BST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789"<br />
SRMClientV1 : copy, contacting service httpg://ccsvli15.in2p3.fr:8443/srm/managerv1<br />
Tue Apr 04 11:43:36 BST 2006: srm returned requestId = -2146460704<br />
Tue Apr 04 11:43:36 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 11:43:38 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:43:42 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:43:47 BST 2006: FileRequestStatus fileID = -2146460703 is Done => copying of srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer is complete<br />
[aggarwa@gfe03 dcache]$<br />
<br />
</pre><br />
<br />
=== CERN-UI (Failed)===<br />
<pre><br />
<br />
[lxplus061] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/h<br />
eartbeat/test.transfer srm://ccsrm.in2p3.fr:8443/srm/managerv1SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://ccsrm.in2p3.fr:8443/srm/managerv1SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.transfer1<br />
<br />
Tue Apr 04 18:11:28 CEST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 18:11:28 CEST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 18:11:28 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://ccsvli15.in2p3.fr:8443/srm/managerv1<br />
Tue Apr 04 18:11:29 CEST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 18:11:29 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 18:11:29 CEST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://ccsrm.in2p3.fr:8443/srm/managerv1SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://ccsrm.in2p3.fr:8443/srm/managerv1SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://ccsvli15.in2p3.fr:8443/srm/managerv1<br />
Tue Apr 04 18:11:31 CEST 2006: srm returned requestId = -2146455597<br />
Tue Apr 04 18:11:31 CEST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 18:11:33 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 18:11:39 CEST 2006: FileRequestStatus is Failed => copying of srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer has failed<br />
Exception in thread "main" java.io.IOException: Request with requestId =-2146455597 rs.state = Failed rs.error = <br />
RequestFileStatus#-2146455596 failed with error:[ we reached the root of the directories, none of the elements exist from the pnfs manager point of vieww, we do not have permission to create this directory tree: ///srm/managerv1SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.transfer1]<br />
<br />
at gov.fnal.srm.util.SRMCopyPullClient.start(SRMCopyPullClient.java:276)<br />
at gov.fnal.srm.util.SRMCopy.work(SRMCopy.java:450)<br />
at gov.fnal.srm.util.SRMCopy.main(SRMCopy.java:248)<br />
Tue Apr 04 18:11:39 CEST 2006: setting all remaining file statuses of request requestId=-2146455597 to "Done"<br />
Tue Apr 04 18:11:39 CEST 2006: setting file request -2146455596 status to Done<br />
Tue Apr 04 18:11:39 CEST 2006: set all file statuses to "Done"<br />
[lxplus061] /afs/cern.ch/user/s/swakef > <br />
</pre><br />
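Note that the destination SURL in this run reads .../srm/managerv1SFN=... with no '?' separating the web-service path from the SFN, and the server error confirms it treated the whole string as a directory tree (///srm/managerv1SFN=...). A tiny sanity check on SRM v1 SURLs before submitting would catch this; a sketch (the helper is ours, the two example SURLs come from the logs above):<br />

```python
def surl_is_wellformed(surl):
    """An SRM v1 SURL must separate the service path from the site
    file name with '?SFN='; a missing '?' makes the server interpret
    the rest of the string as a literal path, as in the failure above."""
    return surl.startswith("srm://") and "?SFN=" in surl

broken = ("srm://ccsrm.in2p3.fr:8443/srm/managerv1"
          "SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.transfer1")
working = ("srm://ccsrm.in2p3.fr:8443/srm/managerv1"
           "?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.transfer1")

print(surl_is_wellformed(broken))   # False
print(surl_is_wellformed(working))  # True
```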
<br />
== IC-HEP -> T1_PIC ==<br />
<br />
=== IC-UI (Success)===<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354<br />
<br />
Tue Apr 04 11:46:51 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 11:46:51 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 11:46:51 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://se10.pic.es:8443/srm/managerv1<br />
Tue Apr 04 11:46:52 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 11:46:52 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 11:46:52 BST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354"<br />
SRMClientV1 : copy, contacting service httpg://se10.pic.es:8443/srm/managerv1<br />
Tue Apr 04 11:47:07 BST 2006: srm returned requestId = 560464781<br />
Tue Apr 04 11:47:07 BST 2006: sleeping 6 seconds ...<br />
Tue Apr 04 11:47:14 BST 2006: sleeping 31 seconds ...<br />
Tue Apr 04 11:47:46 BST 2006: sleeping 31 seconds ...<br />
Tue Apr 04 11:48:17 BST 2006: FileRequestStatus fileID = 0 is Done => copying of srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354 is complete<br />
[aggarwa@gfe03 dcache]$ <br />
</pre><br />
<br />
=== CERN-UI (Success)===<br />
<br />
<pre>[lxplus061] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/h<br />
eartbeat/test.transfer srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.transfer1<br />
<br />
Tue Apr 04 18:14:23 CEST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 18:14:23 CEST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 18:14:23 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://se11.pic.es:8443/srm/managerv1<br />
Tue Apr 04 18:14:24 CEST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 18:14:24 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 18:14:24 CEST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://se11.pic.es:8443/srm/managerv1<br />
Tue Apr 04 18:14:29 CEST 2006: srm returned requestId = 650839222<br />
Tue Apr 04 18:14:29 CEST 2006: sleeping 6 seconds ...<br />
Tue Apr 04 18:14:36 CEST 2006: sleeping 31 seconds ...<br />
Tue Apr 04 18:15:07 CEST 2006: FileRequestStatus fileID = 0 is Done => copying of srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.transfer1 is complete<br />
[lxplus061] /afs/cern.ch/user/s/swakef > <br />
</pre><br />
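Every transfer in these tests addresses storage through SRM v1 SURLs of the same shape: an srm:// scheme, the SE host on port 8443, the /srm/managerv1 endpoint, and the site file name passed as the SFN query parameter. A minimal sketch of a helper that builds such SURLs (the function name is hypothetical, not part of any srmcp tooling):<br />

```python
def srm_surl(host, sfn, port=8443):
    """Build an SRM v1 SURL of the form used throughout these tests,
    e.g. srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/..."""
    return f"srm://{host}:{port}/srm/managerv1?SFN={sfn}"
```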
<br />
== IC-HEP -> T1_ASGC ==<br />
<br />
=== IC-UI (Failed) ===<br />
<br />
<pre>[aggarwa@gfe03 dcache]$ srmcp srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144153743.4203<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144153743.4203<br />
<br />
Tue Apr 04 14:32:43 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 14:32:43 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 14:32:43 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://castor.grid.sinica.edu.tw:8443/srm/managerv1<br />
Tue Apr 04 14:32:45 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 14:32:45 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 14:32:45 BST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144153743.4203<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144153743.4203"<br />
SRMClientV1 : copy, contacting service httpg://castor.grid.sinica.edu.tw:8443/srm/managerv1<br />
Tue Apr 04 14:32:55 BST 2006: srm returned requestId = 741279187<br />
Tue Apr 04 14:32:55 BST 2006: sleeping 6 seconds ...<br />
Tue Apr 04 14:33:03 BST 2006: sleeping 31 seconds ...<br />
Tue Apr 04 14:33:36 BST 2006: rfs.state is failed calling setFileStatus(741279187,0,"Done")<br />
Exception in thread "main" java.io.IOException: Request with requestId =741279187 rs.state = Failed rs.error = null<br />
at gov.fnal.srm.util.SRMCopyPullClient.start(SRMCopyPullClient.java:276)<br />
at gov.fnal.srm.util.SRMCopy.work(SRMCopy.java:450)<br />
at gov.fnal.srm.util.SRMCopy.main(SRMCopy.java:248)<br />
Tue Apr 04 14:33:39 BST 2006: setting all remaining file statuses of request requestId=741279187 to "Done"<br />
Tue Apr 04 14:33:39 BST 2006: setting file request 0 status to Done<br />
Tue Apr 04 14:33:41 BST 2006: set all file statuses to "Done"<br />
[aggarwa@gfe03 dcache]$ <br />
</pre><br />
<br />
=== CERN-UI (Failed) ===<br />
<br />
<pre><br />
[lxplus061] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.transfer1<br />
<br />
Tue Apr 04 18:19:22 CEST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 18:19:22 CEST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 18:19:22 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://castor.grid.sinica.edu.tw:8443/srm/managerv1<br />
Tue Apr 04 18:19:23 CEST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 18:19:23 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 18:19:23 CEST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://castor.grid.sinica.edu.tw:8443/srm/managerv1<br />
Tue Apr 04 18:19:33 CEST 2006: srm returned requestId = 706217168<br />
Tue Apr 04 18:19:33 CEST 2006: sleeping 6 seconds ...<br />
Tue Apr 04 18:19:41 CEST 2006: sleeping 31 seconds ...<br />
Tue Apr 04 18:20:15 CEST 2006: rfs.state is failed calling setFileStatus(706217168,0,"Done")<br />
Exception in thread "main" java.io.IOException: Request with requestId =706217168 rs.state = Failed rs.error = null<br />
at gov.fnal.srm.util.SRMCopyPullClient.start(SRMCopyPullClient.java:276)<br />
at gov.fnal.srm.util.SRMCopy.work(SRMCopy.java:450)<br />
at gov.fnal.srm.util.SRMCopy.main(SRMCopy.java:248)<br />
Tue Apr 04 18:20:17 CEST 2006: setting all remaining file statuses of request requestId=706217168 to "Done"<br />
Tue Apr 04 18:20:17 CEST 2006: setting file request 0 status to Done<br />
Tue Apr 04 18:20:20 CEST 2006: set all file statuses to "Done"<br />
[lxplus061] /afs/cern.ch/user/s/swakef > <br />
<br />
</pre><br />
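The outcome of each transcript above can be checked mechanically: successful runs end with a "FileRequestStatus ... is Done => copying of ... is complete" line, while failed runs raise an IOException carrying "rs.state = Failed". A minimal sketch of a classifier over a captured srmcp log (a hypothetical helper, not part of the srmcp toolkit):<br />

```python
import re

def classify_srmcp_log(text):
    """Classify one captured srmcp transcript as 'success', 'failed',
    or 'unknown', using the status lines seen in the logs above."""
    if re.search(r'is Done => copying of \S+ is complete', text):
        return 'success'
    if 'rs.state = Failed' in text or 'FileRequestStatus is Failed' in text:
        return 'failed'
    return 'unknown'
```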
<br />
== T1_RAL -> IC-HEP ==<br />
<br />
=== IC-UI (Success) ===<br />
<pre><br />
<br />
[aggarwa@gfe03 dcache]$ srmcp srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
<br />
Tue Apr 04 10:53:33 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 10:53:33 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 10:53:33 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 10:53:35 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 10:53:35 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 10:53:35 BST 2006: copying srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
SRMClientV1 : copy, srcSURLS[0]="srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 10:53:39 BST 2006: srm returned requestId = -2147455707<br />
Tue Apr 04 10:53:39 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 10:53:40 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:45 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:49 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:54 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:58 BST 2006: FileRequestStatus fileID = -2147455706 is Done => copying of srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer is complete<br />
<br />
</pre><br />
<br />
=== CERN-UI (Success) ===<br />
<br />
<pre><br />
[lxplus061] /afs/cern.ch/user/s/swakef > srmcp --debug=true srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test-heartbeat1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=0<br />
stream_num=10<br />
config_file=/afs/cern.ch/user/s/swakef/.srmconfig/config.xml<br />
glue_mapfile=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm<br />
urlcopy=/afs/cern.ch/project/gd/LCG-share/2.7.0/sl3/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/afs/cern.ch/user/s/swakef/.globus/usercert.pem<br />
x509_user_key=/afs/cern.ch/user/s/swakef/.globus/userkey.pem<br />
x509_user_proxy=/tmp/x509up_u5146<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=false<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test-heartbeat1<br />
<br />
Tue Apr 04 19:03:26 CEST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 19:03:26 CEST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 19:03:26 CEST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 19:03:27 CEST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 19:03:27 CEST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 19:03:27 CEST 2006: copying srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test-heartbeat1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test-heartbeat1"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 19:03:30 CEST 2006: srm returned requestId = -2147455615<br />
Tue Apr 04 19:03:30 CEST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 19:03:31 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 19:03:36 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 19:03:40 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 19:03:45 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 19:03:49 CEST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 19:03:54 CEST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 19:04:01 CEST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 19:04:08 CEST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 19:04:16 CEST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 19:04:23 CEST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 19:04:31 CEST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 19:04:41 CEST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 19:04:52 CEST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 19:05:02 CEST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 19:05:13 CEST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 19:05:23 CEST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 19:05:37 CEST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 19:05:51 CEST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 19:06:04 CEST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 19:06:17 CEST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 19:06:31 CEST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 19:06:47 CEST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 19:07:04 CEST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 19:07:20 CEST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 19:07:37 CEST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 19:07:53 CEST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 19:08:13 CEST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 19:08:32 CEST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 19:08:51 CEST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 19:09:11 CEST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 19:09:30 CEST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 19:09:53 CEST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 19:10:15 CEST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 19:10:38 CEST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 19:11:00 CEST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 19:11:23 CEST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 19:11:48 CEST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 19:12:14 CEST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 19:12:39 CEST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 19:13:04 CEST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 19:13:30 CEST 2006: sleeping 28 seconds ...<br />
Tue Apr 04 19:13:58 CEST 2006: FileRequestStatus fileID = -2147455614 is Done => copying of srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer is complete<br />
</pre><br />
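The "sleeping N seconds" lines in the transcript above follow a stepped backoff: one initial 1-second poll, then groups of five polls whose interval grows by 3 seconds each group (4, 7, 10, ...). A hypothetical reconstruction of that schedule, inferred only from these timestamps (other transcripts on this page show a different 6/31-second pattern, so this is not a general srmcp guarantee):<br />

```python
import itertools

def poll_delays():
    """Yield poll intervals matching the backoff observed in the
    transcript above: 1 s once, then 4 s x5, 7 s x5, 10 s x5, ..."""
    yield 1
    for step in itertools.count(0):
        delay = 4 + 3 * step  # interval grows by 3 s per group of five
        for _ in range(5):
            yield delay
```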
<br />
== T1_CERN -> IC-HEP (Failed) ==<br />
<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/heartbeat.T1_CERN.1144059705.1673<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/heartbeat.T1_CERN.1144059705.1673<br />
<br />
Tue Apr 04 11:20:11 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 11:20:11 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 11:20:11 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 11:20:12 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 11:20:12 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 11:20:12 BST 2006: copying srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/heartbeat.T1_CERN.1144059705.1673<br />
SRMClientV1 : copy, srcSURLS[0]="srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/heartbeat.T1_CERN.1144059705.1673"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 11:20:15 BST 2006: srm returned requestId = -2147455701<br />
Tue Apr 04 11:20:15 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 11:20:17 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:20:21 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:20:25 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:20:30 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:20:34 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 11:20:39 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 11:20:46 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 11:20:53 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 11:21:01 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 11:21:08 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 11:21:15 BST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 11:21:26 BST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 11:21:36 BST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 11:21:46 BST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 11:21:57 BST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 11:22:07 BST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 11:22:21 BST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 11:22:34 BST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 11:22:47 BST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 11:23:00 BST 2006: sleeping 13 seconds ...<br />
Tue Apr 04 11:23:14 BST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 11:23:30 BST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 11:23:46 BST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 11:24:03 BST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 11:24:19 BST 2006: sleeping 16 seconds ...<br />
Tue Apr 04 11:24:35 BST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 11:24:55 BST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 11:25:14 BST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 11:25:34 BST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 11:25:53 BST 2006: sleeping 19 seconds ...<br />
Tue Apr 04 11:26:12 BST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 11:26:34 BST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 11:26:57 BST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 11:27:19 BST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 11:27:41 BST 2006: sleeping 22 seconds ...<br />
Tue Apr 04 11:28:04 BST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 11:28:29 BST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 11:28:54 BST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 11:29:20 BST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 11:29:45 BST 2006: sleeping 25 seconds ...<br />
Tue Apr 04 11:30:10 BST 2006: sleeping 28 seconds ...<br />
Tue Apr 04 11:30:39 BST 2006: sleeping 28 seconds ...<br />
Tue Apr 04 11:31:07 BST 2006: sleeping 28 seconds ...<br />
Tue Apr 04 11:31:35 BST 2006: sleeping 28 seconds ...<br />
Tue Apr 04 11:32:03 BST 2006: sleeping 28 seconds ...<br />
Tue Apr 04 11:32:32 BST 2006: sleeping 31 seconds ...<br />
Tue Apr 04 11:33:03 BST 2006: FileRequestStatus is Failed => copying of srm://srm.cern.ch:8443/srm/managerv1?SFN=/castor/cern.ch/cms/test.transfer has failed<br />
Exception in thread "main" java.io.IOException: Request with requestId =-2147455701 rs.state = Failed rs.error = <br />
RequestFileStatus#-2147455700 failed with error:[ retrieval of "from" TURL failed with error java.lang.RuntimeException: java.net.ConnectException: Connection timed out]<br />
<br />
at gov.fnal.srm.util.SRMCopyPullClient.start(SRMCopyPullClient.java:276)<br />
at gov.fnal.srm.util.SRMCopy.work(SRMCopy.java:450)<br />
at gov.fnal.srm.util.SRMCopy.main(SRMCopy.java:248)<br />
Tue Apr 04 11:33:03 BST 2006: setting all remaining file statuses of request requestId=-2147455701 to "Done"<br />
Tue Apr 04 11:33:03 BST 2006: setting file request -2147455700 status to Done<br />
Tue Apr 04 11:33:03 BST 2006: set all file statuses to "Done"<br />
[aggarwa@gfe03 dcache]$ <br />
</pre><br />
<br />
== T1_FNAL -> IC-HEP (Success) ==<br />
<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001 srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer1<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer1<br />
<br />
Tue Apr 04 12:00:59 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 12:00:59 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 12:00:59 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 12:01:00 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 12:01:00 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 12:01:00 BST 2006: copying srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001 into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer1<br />
SRMClientV1 : copy, srcSURLS[0]="srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer1"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 12:01:03 BST 2006: srm returned requestId = -2147455689<br />
Tue Apr 04 12:01:03 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 12:01:04 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:01:08 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:01:13 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:01:17 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:01:22 BST 2006: FileRequestStatus fileID = -2147455688 is Done => copying of srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/resilient/RobustChallenge/phedex_heartbeat/heartbeat.T2_LT2_IC_HEP.1144138958.1001 is complete<br />
[aggarwa@gfe03 dcache]$ <br />
<br />
</pre><br />
<br />
== T1_IN2P3 -> IC-HEP (Success) ==<br />
<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789 srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer2<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer2<br />
<br />
Tue Apr 04 12:06:21 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 12:06:21 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 12:06:21 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 12:06:22 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 12:06:22 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 12:06:22 BST 2006: copying srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789 into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer2<br />
SRMClientV1 : copy, srcSURLS[0]="srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer2"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 12:06:25 BST 2006: srm returned requestId = -2147455687<br />
Tue Apr 04 12:06:25 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 12:06:26 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:06:31 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:06:35 BST 2006: FileRequestStatus fileID = -2147455686 is Done => copying of srm://ccsrm.in2p3.fr:8443/srm/managerv1?SFN=/pnfs/in2p3.fr/data/cms/import/heartbit/heartbeat.T2_LT2_IC_HEP.1144138870.7789 is complete<br />
</pre><br />
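The timestamps in these transcripts show srmcp's polling loop: it starts with a short wait and lengthens the interval while the request is still running. A minimal sketch of that backoff pattern (illustrative only — srmcp implements this internally, and the increment and cap here are just read off the output above):<br />

```shell
# Poll a request until it reports Done, backing off between checks.
wait=1
status="Running"
for attempt in 1 2 3 4; do
    if [ "$status" = "Done" ]; then break; fi
    echo "sleeping $wait seconds ..."
    # a real loop would 'sleep $wait' here and re-query the SRM request status
    if [ "$wait" -lt 10 ]; then wait=$((wait + 3)); fi
    if [ "$attempt" -eq 3 ]; then status="Done"; fi   # stand-in for the server reply
done
echo "request is $status"
```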
<br />
== T1_PIC -> IC-HEP (Success) ==<br />
<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354 srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer3<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer3<br />
<br />
Tue Apr 04 12:09:35 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 12:09:35 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 12:09:35 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 12:09:36 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 12:09:36 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 12:09:36 BST 2006: copying srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354 into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer3<br />
SRMClientV1 : copy, srcSURLS[0]="srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer3"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 12:09:38 BST 2006: srm returned requestId = -2147455685<br />
Tue Apr 04 12:09:38 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 12:09:40 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:09:44 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:09:48 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 12:09:53 BST 2006: FileRequestStatus fileID = -2147455684 is Done => copying of srm://castorsrm.pic.es:8443/srm/managerv1?SFN=/castor/pic.es/grid/cms/Test/heartbeat.T2_LT2_IC_HEP.1144139323.5354 is complete<br />
<br />
</pre><br />
<br />
== T1_ASGC -> IC-HEP (Success) ==<br />
<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/test.transfer srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer4<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/test.transfer<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer4<br />
<br />
Tue Apr 04 15:16:48 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 15:16:48 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 15:16:48 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 15:16:49 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 15:16:49 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 15:16:49 BST 2006: copying srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/test.transfer into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer4<br />
SRMClientV1 : copy, srcSURLS[0]="srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer4"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 15:16:51 BST 2006: srm returned requestId = -2147455655<br />
Tue Apr 04 15:16:51 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 15:16:52 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 15:16:57 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 15:17:01 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 15:17:06 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 15:17:10 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 15:17:14 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 15:17:22 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 15:17:29 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 15:17:36 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 15:17:44 BST 2006: sleeping 7 seconds ...<br />
Tue Apr 04 15:17:51 BST 2006: sleeping 10 seconds ...<br />
Tue Apr 04 15:18:01 BST 2006: FileRequestStatus fileID = -2147455654 is Done => copying of srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/test.transfer is complete<br />
[aggarwa@gfe03 dcache]$ <br />
</pre></div>
Aggarwa
https://www.gridpp.ac.uk/wiki/CMS_transfer_log_for_TIER1_sites
CMS transfer log for TIER1 sites
2006-04-04T10:26:06Z
<p>Aggarwa: </p>
<hr />
<div>=== T1_RAL -> IC-HEP ===<br />
<br />
<pre><br />
[aggarwa@gfe03 dcache]$ srmcp srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer<br />
to=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
<br />
Tue Apr 04 10:53:33 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 10:53:33 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 10:53:33 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 10:53:35 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 10:53:35 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 10:53:35 BST 2006: copying srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer into srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
SRMClientV1 : copy, srcSURLS[0]="srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, contacting service httpg://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1<br />
Tue Apr 04 10:53:39 BST 2006: srm returned requestId = -2147455707<br />
Tue Apr 04 10:53:39 BST 2006: sleeping 1 seconds ...<br />
Tue Apr 04 10:53:40 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:45 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:49 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:54 BST 2006: sleeping 4 seconds ...<br />
Tue Apr 04 10:53:58 BST 2006: FileRequestStatus fileID = -2147455706 is Done => copying of srm://dcache.gridpp.rl.ac.uk:8443/srm/managerv1?SFN=/pnfs/gridpp.rl.ac.uk/data/cms/TEST-FILES-GO-IN-HERE/test.transfer is complete<br />
[aggarwa@gfe03 dcache]$<br />
</pre><br />
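srmcp exits non-zero when a request fails (as the failed transfer at the end of this page shows, where the client throws an IOException), so heartbeat tests like these can be scripted on exit status. A hedged sketch — `run_transfer` is a placeholder for the real srmcp invocation, and the SURLs are dummies:<br />

```shell
# Report success/failure of a transfer from its exit status.
# run_transfer is a placeholder; substitute the real command, e.g.
#   srmcp "$src" "$dst"
run_transfer() {
    true
}
src="srm://source.example/path"
dst="srm://dest.example/path"
if run_transfer "$src" "$dst"; then
    result="transfer complete"
else
    result="transfer FAILED"
fi
echo "$result"
```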
<br />
=== IC-HEP -> T1_ASGC ===<br />
<br />
<pre><br />
<br />
[aggarwa@gfe03 dcache]$ srmcp srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144139325.6204<br />
Storage Resource Manager (SRM) CP Client version 1.20<br />
Copyright (c) 2002-2005 Fermi National Accelerator Laboratory<br />
<br />
SRM Configuration:<br />
debug=true<br />
gsissl=true<br />
help=false<br />
pushmode=false<br />
userproxy=true<br />
buffer_size=131072<br />
tcp_buffer_size=1048576<br />
stream_num=1<br />
config_file=/home/aggarwa/.srmconfig/config.xml<br />
glue_mapfile=/opt/d-cache/srm/conf/SRMServerV1.map<br />
webservice_path=srm/managerv1.wsdl<br />
webservice_protocol=https<br />
gsiftpclinet=globus-url-copy<br />
protocols_list=http,gsiftp<br />
save_config_file=null<br />
srmcphome=/opt/d-cache/srm<br />
urlcopy=/opt/d-cache/srm/sbin/url-copy.sh<br />
x509_user_cert=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_key=/home/aggarwa/k5-ca-proxy.pem<br />
x509_user_proxy=/tmp/x509up_u35225<br />
x509_user_trusted_certificates=/etc/grid-security/certificates<br />
globus_tcp_port_range=null<br />
gss_expected_name=null<br />
retry_num=20<br />
retry_timeout=10000<br />
wsdl_url=null<br />
use_urlcopy_script=true<br />
connect_to_wsdl=false<br />
delegate=true<br />
full_delegation=true<br />
from[0]=srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer<br />
to=srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144139325.6204<br />
<br />
Tue Apr 04 11:08:40 BST 2006: starting SRMCopyPullClient<br />
Tue Apr 04 11:08:40 BST 2006: In SRMClient ExpectedName: host<br />
Tue Apr 04 11:08:40 BST 2006: SRMClient(https,srm/managerv1.wsdl,true)<br />
user credentials are: /C=UK/O=eScience/OU=Imperial/L=Physics/CN=stuart wakefield<br />
SRMClientV1 : connecting to srm at httpg://castor.grid.sinica.edu.tw:8443/srm/managerv1<br />
Tue Apr 04 11:08:43 BST 2006: connected to server, obtaining proxy<br />
Tue Apr 04 11:08:43 BST 2006: got proxy of type class org.dcache.srm.client.SRMClientV1<br />
Tue Apr 04 11:08:43 BST 2006: copying srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer into srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144139325.6204<br />
SRMClientV1 : copy, srcSURLS[0]="srm://gfe02.hep.ph.ic.ac.uk:8443/srm/managerv1?SFN=/pnfs/hep.ph.ic.ac.uk/data/cms/heartbeat/test.transfer"<br />
SRMClientV1 : copy, destSURLS[0]="srm://castor.grid.sinica.edu.tw:8443/srm/managerv1?SFN=/castor/grid.sinica.edu.tw/grid/cms/heartbeat/heartbeat.T2_LT2_IC_HEP.1144139325.6204"<br />
SRMClientV1 : copy, contacting service httpg://castor.grid.sinica.edu.tw:8443/srm/managerv1<br />
Tue Apr 04 11:08:53 BST 2006: srm returned requestId = 712443126<br />
Tue Apr 04 11:08:53 BST 2006: sleeping 6 seconds ...<br />
Tue Apr 04 11:09:02 BST 2006: sleeping 31 seconds ...<br />
Tue Apr 04 11:09:35 BST 2006: rfs.state is failed calling setFileStatus(712443126,0,"Done")<br />
Exception in thread "main" java.io.IOException: Request with requestId =712443126 rs.state = Failed rs.error = null<br />
at gov.fnal.srm.util.SRMCopyPullClient.start(SRMCopyPullClient.java:276)<br />
at gov.fnal.srm.util.SRMCopy.work(SRMCopy.java:450)<br />
at gov.fnal.srm.util.SRMCopy.main(SRMCopy.java:248)<br />
Tue Apr 04 11:09:37 BST 2006: setting all remaining file statuses of request requestId=712443126 to "Done"<br />
Tue Apr 04 11:09:37 BST 2006: setting file request 0 status to Done<br />
Tue Apr 04 11:09:39 BST 2006: set all file statuses to "Done"<br />
[aggarwa@gfe03 dcache]$ <br />
<br />
</pre></div>
Aggarwa
https://www.gridpp.ac.uk/wiki/LT2_Testzone
LT2 Testzone
2006-01-18T15:26:33Z
<p>Aggarwa: </p>
<hr />
<div>== RB/BDII update ==<br />
<br />
We are using RHEL.<br />
<br />
In /etc/sysconfig/rhn/sources we added the new repositories:<br />
<pre><br />
yum lcg2 http://grid-deployment.web.cern.ch/grid-deployment/apt-cert/HEAD/sl3/en/i386/RPMS.lcg_sl3/<br />
yum lcg2_udpate http://grid-deployment.web.cern.ch/grid-deployment/apt-cert/HEAD/sl3/en/i386/RPMS.lcg_sl3.updates/<br />
</pre><br />
<br />
Then we ran:<br />
<pre><br />
up2date-nox -u -nosig lcg-RB<br />
</pre><br />
to update all the RPMs.<br />
<br />
We then changed site-info.def; the important parts are shown below.<br />
<br />
<pre><br />
MY_DOMAIN=hep.ph.ic.ac.uk<br />
<br />
CE_HOST=gw39.$MY_DOMAIN<br />
# note: SE_HOST removed --> see CLASSIC_HOST, DCACHE_ADMIN, DPM_HOST below<br />
RB_HOST=gm02.$MY_DOMAIN<br />
PX_HOST=my-px.$MY_DOMAIN<br />
BDII_HOST=gm02.$MY_DOMAIN<br />
MON_HOST=gw25.$MY_DOMAIN<br />
LFC_HOST=gm01.$MY_DOMAIN<br />
REG_HOST=lcgic01.gridpp.rl.ac.uk # there is only 1 central registry for now<br />
<br />
# Set this if you are building a VO-BOX <br />
VOBOX_HOST=my-vobox.$MY_DOMAIN<br />
VOBOX_PORT=1975<br />
<br />
#Set this to "yes" your site provides an X509toKERBEROS Authentication Server <br />
#Only for sites with Experiment Software Area under AFS <br />
GSSKLOG=no<br />
GSSKLOG_SERVER=my-gssklog.$MY_DOMAIN<br />
<br />
## Removed the LFC <br />
# Change this if your torque server is not on the CE<br />
# it's ignored for other batch systems<br />
TORQUE_SERVER=$CE_HOST<br />
<br />
WN_LIST=/opt/lcg/yaim/config/wn-list.conf<br />
USERS_CONF=/opt/lcg/yaim/config/users.conf<br />
GROUPS_CONF=/opt/lcg/yaim/config/groups.conf<br />
FUNCTIONS_DIR=/opt/lcg/yaim/functions<br />
<br />
## We do not care about these because we are using up2date, where we have given the correct path<br />
# Pick the apt-get sources appropriate to your OS - uncomment one line<br />
LCG_REPOSITORY="'rpm http://linuxsoft.cern.ch LCG/apt/LCG-2_7_0/sl3/en/i386 lcg_sl3 lcg_sl3.updates lcg_sl3.security' 'rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG-2_7_0/sl3/en/i386 lcg_sl3 lcg_sl3.updates lcg_sl3.security'"<br />
<br />
CA_REPOSITORY="rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG_CA/en/i386 lcg"<br />
<br />
# For the relocatable (tarball) distribution, ensure<br />
# that INSTALL_ROOT is set correctly<br />
INSTALL_ROOT=/opt<br />
<br />
# You will probably want to change these too for the relocatable dist<br />
OUTPUT_STORAGE=/tmp/jobOutput<br />
JAVA_LOCATION="/usr/java/j2sdk1.4.2_08"<br />
<br />
# Set this to '/dev/null' or some other dir if you want<br />
# to turn off yaim installation of cron jobs<br />
CRON_DIR=/etc/cron.d<br />
<br />
GLOBUS_TCP_PORT_RANGE="20000 25000"<br />
<br />
MYSQL_PASSWORD=not shown<br />
<br />
APEL_DB_PASSWORD="not shown"<br />
<br />
#<br />
# ---> GRID_TRUSTED_BROKERS: put single quotes around each trusted DN !!! <---<br />
#<br />
GRID_TRUSTED_BROKERS="one"<br />
#GRID_TRUSTED_BROKERS="'broker one''broker two'"<br />
# The RB now uses the DLI by default; set VOs here which should use RLS<br />
RB_RLS="atlas cms"<br />
<br />
GRIDMAP_AUTH="ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org"<br />
#GRIDMAP_AUTH="ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org ldap://xyz"<br />
<br />
GRIDICE_SERVER_HOST=$MON_HOST<br />
<br />
SITE_EMAIL=lcg-site-admin@imperial.ac.uk<br />
SITE_NAME=UKI-LT2-IC-HEP<br />
SITE_LOC="London, UK"<br />
SITE_LAT=51.49945300341785<br />
SITE_LONG=-0.17897844314575195<br />
SITE_WEB="http://www.my-site.org"<br />
SITE_TIER="TIER 2"<br />
SITE_SUPPORT_SITE="RAL-LCG2"<br />
<br />
## Removed the CE part<br />
<br />
CLASSIC_HOST=gw38.hep.ph.ic.ac.uk<br />
CLASSIC_STORAGE_DIR="/stage2/lcg2-data"<br />
<br />
# dCache-specific settings<br />
# ignore if you are not running d-cache<br />
<br />
## Removed the dCache part<br />
<br />
## removed the DPM part<br />
<br />
FTS_SERVER_URL="https://fts0344.gridpp.rl.ac.uk:8443/sc3ral/glite-data-transfer-fts"<br />
BDII_HTTP_URL="http://www.gridpp.ac.uk/deployment/testzone/uk_testzone_sites.txt"<br />
<br />
#BDII_HTTP_URL="http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg2-bdii/dteam/lcg2-all-sites.conf"<br />
# Set this to use FCR<br />
BDII_FCR="https://goc.grid-support.ac.uk/gridsite/bdii/BDII/www/bdii-update.ldif"<br />
BDII_REGIONS="RB" # list of the services provided by the site<br />
BDII_CE_URL="ldap://$CE_HOST:2135/mds-vo-name=local,o=grid"<br />
BDII_SE_URL="ldap://$CLASSIC_HOST:2135/mds-vo-name=local,o=grid"<br />
BDII_RB_URL="ldap://$RB_HOST:2135/mds-vo-name=local,o=grid"<br />
BDII_PX_URL="ldap://$PX_HOST:2135/mds-vo-name=local,o=grid"<br />
BDII_LFC_URL="ldap://$LFC_HOST:2135/mds-vo-name=local,o=grid"<br />
BDII_VOBOX_URL="ldap://$VOBOX_HOST:2135/mds-vo-name=local,o=grid"<br />
<br />
VOS="dteam" # add the other VOs your site supports<br />
QUEUES=${VOS}<br />
<br />
VO_SW_DIR=/opt/exp_soft<br />
<br />
# set this if you want a scratch directory for jobs<br />
EDG_WL_SCRATCH=""<br />
<br />
VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam<br />
VO_DTEAM_DEFAULT_SE=$CLASSIC_HOST<br />
VO_DTEAM_STORAGE_DIR=$CLASSIC_STORAGE_DIR/dteam<br />
VO_DTEAM_QUEUES="dteam"<br />
<br />
VO_DTEAM_SGM=ldap://lcg-vo.cern.ch/ou=lcgadmin,o=dteam,dc=lcg,dc=org<br />
VO_DTEAM_USERS=ldap://lcg-vo.cern.ch/ou=lcg1,o=dteam,dc=lcg,dc=org<br />
VO_DTEAM_VOMS_SERVERS="'vomss://lcg-voms.cern.ch:8443/voms/dteam?/dteam/' 'vomss://voms.cern.ch:8443/voms/dteam?/dteam/'"<br />
VO_DTEAM_VOMSES="'dteam lcg-voms.cern.ch 15004 /C=CH/O=CERN/OU=GRID/CN=host/lcg-voms.cern.ch dteam' 'dteam voms.cern.ch 15004 /C=CH/O=CERN/OU=GRID/CN=host/voms.cern.ch dteam'"<br />
<br />
VO_MIS_QUEUES="mis"<br />
VO_MIS_VOMS_SERVERS=""<br />
</pre><br />
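Since yaim simply sources site-info.def as shell, a quick way to catch a typo before rerunning the configurator is to source the file and check that the key variables came out non-empty. A sketch using a cut-down file (the path and the set of required variables depend on the yaim version):<br />

```shell
# Write a cut-down site-info.def and verify required variables are set.
cat > /tmp/site-info-check.def <<'EOF'
MY_DOMAIN=hep.ph.ic.ac.uk
RB_HOST=gm02.$MY_DOMAIN
BDII_HOST=gm02.$MY_DOMAIN
EOF

# Source it the way yaim would, then check each variable is non-empty.
. /tmp/site-info-check.def
missing=0
for var in MY_DOMAIN RB_HOST BDII_HOST; do
    eval val=\"\$$var\"
    if [ -z "$val" ]; then
        echo "missing: $var"
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then echo "site-info looks sane"; fi
```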
<br />
Then we reran: <br />
<pre><br />
/opt/lcg/yaim/scripts/configure_node site-info-rb-2.7.0.def RB<br />
/opt/lcg/yaim/scripts/configure_node site-info-rb-2.7.0.def BDII<br />
</pre><br />
<br />
<br />
[[Category:UKI Testzone]] [[Category:LCG 2.7 Testing]]</div>
Aggarwa
https://www.gridpp.ac.uk/wiki/IC-HEP_(10_Questions)
IC-HEP (10 Questions)
2005-12-02T13:37:54Z
<p>Aggarwa: </p>
<hr />
<div>'''Status: DRAFT'''<br />
<br />
== Question 1 == <br />
<br />
''Provide the name and contact details of your local (Departmental) and Institutional network support staff.''<br />
<br />
* IC Departmental network support contact is: Kostas Georgiou, Systems Manager, Department of High Energy Physics, email: k.georgious@imperial.ac.uk<br />
* IC Institutional network support contact is: Phil Mayers et al, Information and Communication Technologies (ICT), email: servicedesk@imperial.ac.uk<br />
<br />
== Question 2 ==<br />
<br />
''Provide details of the responsibilities, together with the demarcation of those responsibilities, of your local and Institutional network support staff.''<br />
<br />
* The departmental contact is responsible for: <br />
<br />
* The institutional contact is responsible for: <br />
<br />
Services include: a high-quality physical infrastructure (networks, servers, computer clusters, etc.); support for individual desktops and for teaching and research clusters; College-wide services such as e-mail and central file storage; and key College management information systems.<br />
<br />
== Question 3 ==<br />
<br />
''What is a Regional Network Operator (RNO), and why does this matter to you?''<br />
<br />
* An RNO is: a regional network operator; ours is the London MAN (LMN)<br />
* I care because: it provides our connection to the rest of the internet.<br />
<br />
== Question 4 ==<br />
<br />
''What is SuperJANET4? And more importantly what is SuperJANET5?''<br />
<br />
* SuperJANET4 is: the current UK academic network backbone<br />
* SuperJANET5 is: its proposed upgrade<br />
<br />
== Questions 5, 6, 7 and 9 (part) ==<br />
<br />
5: ''Draw a simple diagram showing your local (Departmental) network and sufficient of your Institutional network such that you can trace a line from your end-system to the connection from your Institutes network into the RNO infrastructure.''<br />
<br />
6: ''On the diagram produced in answer to Question 5, show the capacity of each link in the network and provide a note against each link of its contention ratio.''<br />
<br />
7: ''On the diagram produced in answer to Question 5, colour and distinguish the switches and routers and for each device provide a note of its backplane capability.''<br />
<br />
9.x: ''On the diagram produced in answer to Question 5 colour in the firewall(s) (or other security devices).''<br />
<br />
(upload an image via http://wiki.gridpp.ac.uk/wiki/Special:Upload)<br />
<br />
<br />
== Question 8 ==<br />
<br />
''What is the average and peak traffic flow between your local (Departmental) network and the Institutional network?''<br />
<br />
* Average traffic: measured over the last few days, < 1 Mbit/s<br />
* Peak traffic: monitoring over the last few days shows ~10 Mbit/s<br />
<br />
''What is the average and peak traffic flow between your Institutional network and the RNO?''<br />
<br />
* Average traffic: <br />
* Peak traffic: <br />
<br />
''What is the total capacity of your Institutional connection to the RNO?''<br />
<br />
* Our total capacity is: 2 Gbit/s<br />
<br />
''What are the upgrade plans for your local (Departmental) network; your Institutional network and the network run by the RNO?''<br />
<br />
* Departmental plans: None<br />
* Institutional plans: plans to upgrade to a 10 Gbit connection within Imperial<br />
* RNO plans: Not known<br />
<br />
== Question 9 ==<br />
<br />
''Do you believe in IS Security? Does your Institute believe in IS Security?''<br />
<br />
* I'm a believer: YES<br />
* We're collective believers: YES<br />
<br />
''Do you believe in firewalls? Does your Institute believe in firewalls?''<br />
<br />
* I'm a believer: YES<br />
* We're collective believers: YES<br />
<br />
''Provide information of how changes are made to the rule set of the firewall.''<br />
<br />
* Firewall rules are changed by: ICT <br />
<br />
''Provide a note of the capacity of this device and what happens when that capacity is exceeded.''<br />
<br />
* The capacity is: 500 Mbit/s<br />
* When it goes over-capacity, the following happens: <br />
<br />
== Question 10 ==<br />
<br />
''What is the best performance you can achieve from your end-system to an equivalent system located in some geographically remote (and friendly!) Institute?''<br />
<br />
* Best performance is: ~500 Mbit/s<br />
<br />
For your end-system: <br />
<br />
: ''Do you understand the kernel, the bus structure; the NIC; and the disk system?''<br />
<br />
* I understand: YES<br />
<br />
: ''Do you understand TCP tuning and what it can do for you?''<br />
<br />
* I understand: YES (need to know more about it)<br />
<br />
: ''Do you understand your application and what it can do to your performance?''<br />
<br />
* I understand: YES (need to know more about it)<br />
<br />
<br />
[[Category:Network]]<br />
[[Category:London Tier2]]</div>
Aggarwa