RAL Tier1 Operations Report for 7th June 2017

Review of Issues during the week 31st May to 7th June 2017.
  • Following the upgrade of the Castor GEN instance last Wednesday (31st) there was a problem with the 'OPS' tests against the instance; this was fixed by a configuration correction the following day. No other problems were encountered after the upgrade.
  • There was a transitory problem with the CMS Castor instance for a couple of hours in the early morning of Friday (2nd June) caused by blocking sessions in the database.
  • We are seeing a high rate of reported disk problems on the OCF '14 batch of disk servers. In some of the cases the vendor finds no fault in the drives that have been removed. We plan to update the RAID card firmware in these systems following testing of the latest version.
  • Last week we reported a problem of ARP poisoning on the Tier1 network that affected some monitoring used by Echo. The fix (a change to a setting on the OPN router) made last Wednesday appears to have resolved this.
  • The overall CPU efficiency for May was 77.3%, compared with 67.0% in April (a sketch of the efficiency calculation follows this list). Notably:
    • ATLAS has started to improve recently and has been well above 90% for the past week.
    • The LHCb efficiency dropped significantly on 11th May (the day of the LHCb CASTOR upgrade) but in the past few days has risen to above 90%. This may be due to the installation of an xroot manager on the (LHCb) stager, as reported last week, which resolved a problem where a TURL returned by the SRM did not always work when used for xroot access.
  • Echo (CEPH) has seen an internal problem where the LevelDBs on some OSDs have grown very large, causing latency problems. The cause is largely understood and has been worked around by the introduction of the XRoot proxies. Progress has also been made in reducing the sizes of these DBs (a size-check sketch follows this list).
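The CPU efficiency figures quoted above are, in essence, total CPU time divided by the core-hours of wall time occupied by jobs. The sketch below illustrates that calculation; the job records and field names are assumptions for the example, not the site's actual accounting schema.
<pre>
# Minimal sketch of the monthly CPU-efficiency calculation: total CPU time
# divided by total core-hours of wall time, per VO. Field names and the
# example jobs are illustrative, not the real accounting records.
from collections import defaultdict

def cpu_efficiency(jobs):
    cpu = defaultdict(float)   # VO -> CPU seconds
    wall = defaultdict(float)  # VO -> core-seconds of wall time
    for job in jobs:
        cpu[job['vo']] += job['cpu_seconds']
        wall[job['vo']] += job['cores'] * job['wall_seconds']
    return {vo: 100.0 * cpu[vo] / wall[vo] for vo in cpu if wall[vo] > 0}

# Two made-up jobs: a single-core job at ~90% efficiency and a
# four-core job at ~70% efficiency.
example = [
    {'vo': 'atlas', 'cores': 1, 'wall_seconds': 8 * 3600, 'cpu_seconds': 0.9 * 8 * 3600},
    {'vo': 'cms',   'cores': 4, 'wall_seconds': 6 * 3600, 'cpu_seconds': 0.7 * 4 * 6 * 3600},
]
print(cpu_efficiency(example))  # approximately {'atlas': 90.0, 'cms': 70.0}
</pre>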
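For the Echo LevelDB issue, the growth shows up as a large 'omap' directory on each affected OSD. The sketch below is one way to survey those sizes on a FileStore storage node; the /var/lib/ceph/osd/ceph-* layout and the 10 GiB threshold are assumptions for illustration, and this is not the monitoring actually used on Echo. Any compaction of a flagged OSD (for example via the OSD admin socket's compact command, where the running Ceph release provides it) would be done separately and carefully, given the latency impact described above.
<pre>
# Minimal sketch, assuming FileStore OSDs whose LevelDB lives under
# <osd dir>/current/omap: report the omap size of each OSD on this host and
# flag unusually large ones. Paths and threshold are illustrative assumptions.
import glob
import os

def omap_size_bytes(osd_dir):
    total = 0
    omap_dir = os.path.join(osd_dir, 'current', 'omap')
    for root, _dirs, files in os.walk(omap_dir):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file disappeared mid-scan (e.g. during a compaction)
    return total

THRESHOLD = 10 * 1024 ** 3  # 10 GiB, an arbitrary example threshold

for osd_dir in sorted(glob.glob('/var/lib/ceph/osd/ceph-*')):
    size = omap_size_bytes(osd_dir)
    note = '  <-- large; candidate for compaction' if size > THRESHOLD else ''
    print('{}: {:.1f} GiB{}'.format(os.path.basename(osd_dir), size / 1024.0 ** 3, note))
</pre>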
Resolved Disk Server Issues
  • GDSS658 (AtlasScratchDisk - D1T0) crashed last Tuesday (30th May). It was returned to service on Thursday (1st June). Two disks were replaced after testing flagged them as problematic.
  • GDSS793 (AtlasDataDisk - D1T0) was taken out of service on Monday (5th June) as two disk drives in the server were faulty. Following replacement and rebuilding of the first drive it was returned to service yesterday (6th June), initially read-only.
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
  • There is a problem on the site firewall that is affecting some specific data flows. It was first investigated in connection with videoconferencing problems, and is expected to affect our data flows that pass through the firewall (such as traffic to/from worker nodes).
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • All CEs have now been migrated to use the load balancers in front of the argus service.
  • A start has been made on enabling XRootD gateways on worker nodes for Echo access. This will be ramped up to one batch of worker nodes. (A minimal gateway access-check sketch follows this list.)
  • Batch access has been enabled for LIGO and the MICE pilot role.
  • A parameter change was made in the OPN router to stop it providing proxy ARP information. (A sketch of spotting conflicting ARP replies follows this list.)
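As an illustration of the worker-node gateway change above, the sketch below uses the XRootD Python bindings to check that a file can be stat'ed through a local gateway. The localhost:1094 endpoint and the test path are assumptions for the example, not the real Echo configuration.
<pre>
# Minimal sketch, assuming the XRootD Python bindings (pyxrootd) are installed
# and an XRootD gateway is listening on the worker node. The endpoint and test
# path below are illustrative assumptions, not Echo production values.
from XRootD import client

GATEWAY = 'root://localhost:1094'        # hypothetical local gateway endpoint
TEST_PATH = '/atlas:datadisk/test/file'  # hypothetical test object

def gateway_readable(endpoint, path):
    fs = client.FileSystem(endpoint)
    status, statinfo = fs.stat(path)
    if not status.ok:
        return False, status.message
    return True, 'size = {} bytes'.format(statinfo.size)

ok, detail = gateway_readable(GATEWAY, TEST_PATH)
print('gateway OK:' if ok else 'gateway problem:', detail)
</pre>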
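The proxy-ARP change above relates to the ARP poisoning issue described in the review section. As an illustration only, the sketch below shows one way the symptom (more than one MAC address answering ARP for the same IP) could be spotted with scapy; it is not the Tier1's actual monitoring.
<pre>
# Minimal sketch, assuming scapy is installed and the host can sniff ARP on the
# relevant network segment (usually requires root). It watches ARP replies and
# reports any IP address answered for by more than one MAC address - the kind
# of conflict seen when a router unexpectedly provides proxy ARP.
from scapy.all import ARP, sniff

claimed = {}  # IP address -> set of MAC addresses seen claiming it

def check_reply(pkt):
    if ARP in pkt and pkt[ARP].op == 2:  # op 2 = "is-at" (ARP reply)
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        claimed.setdefault(ip, set()).add(mac)
        if len(claimed[ip]) > 1:
            print('conflicting ARP replies for {}: {}'.format(ip, sorted(claimed[ip])))

sniff(filter='arp', prn=check_reply, store=0)
</pre>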
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links (planned for 14th June).
  • Firmware updates in OCF 14 disk servers.

Listing by category:

  • Castor:
    • Move to generic Castor headnodes.
    • Merge AtlasScratchDisk into larger Atlas disk pool.
  • Networking
    • Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
    • Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
  • Services
    • The production FTS needs updating. The new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report.
{| border="1" cellpadding="1"
|- style="background:#b7f1ce"
! Service !! Scheduled? !! Outage/At Risk !! Start !! End !! Duration !! Reason
|-
| Castor GEN instance (srm-alice, srm-biomed, srm-dteam, srm-ilc, srm-mice, srm-na62, srm-pheno, srm-snoplus, srm-t2k) || SCHEDULED || OUTAGE || 31/05/2017 10:00 || 31/05/2017 12:56 || 5 hours || Upgrade of Castor GEN instance to version 2.1.16.
|}
Open GGUS Tickets (Snapshot during morning of meeting)
{| border="1" cellpadding="1"
|- style="background:#b7f1ce"
! GGUS ID !! Level !! Urgency !! State !! Creation !! Last Update !! VO !! Subject
|-
| 128830 || Green || Less Urgent || In Progress || 2017-06-07 || 2017-06-07 || || Jobs failing at RAL due errors with gfal2
|-
| 127612 || Red || Alarm || In Progress || 2017-04-08 || 2017-05-19 || LHCb || CEs at RAL not responding
|-
| 127597 || Amber || Urgent || In Progress || 2017-04-07 || 2017-05-16 || CMS || Check networking and xrootd RAL-CERN performance
|-
| 127240 || Red || Urgent || In Progress || 2017-03-21 || 2017-05-15 || CMS || Staging Test at UK_RAL for Run2
|-
| 124876 || Red || Less Urgent || On Hold || 2016-11-07 || 2017-01-01 || OPS || [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
|-
| 117683 || Red || Less Urgent || On Hold || 2015-11-18 || 2017-05-10 || || CASTOR at RAL not publishing GLUE 2.
|}
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

{| border="1" cellpadding="1"
|- style="background:#b7f1ce"
! Day !! OPS !! Alice !! Atlas !! CMS !! LHCb !! Atlas Echo !! Atlas HC !! Atlas HC Echo !! CMS HC !! Comment
|-
| 31/05/17 || 100 || 100 || 100 || 89 || 100 || 100 || 100 || 100 || 93 || CMS: Some ‘User timeout over’ on the SRM tests. There were a few CE test failures too, with ‘Job held’ and ‘unable to open remote xrootd ..’ errors.
|-
| 01/06/17 || 100 || 100 || 100 || 97 || 100 || 100 || 100 || 100 || N/A || CMS: Some SRM failures with an error of ‘ERROR: DESTINATION SRM_PUT_TURL error on the turl request : [SE][StatusOfPutRequest][SRM_FAILURE] Unable to issue PrepareToPut request to Castor’.
|-
| 02/06/17 || 100 || 100 || 100 || 92 || 100 || 100 || 100 || 100 || 100 || CMS: Some instances of “Unable to issue PrepareToPut request to Castor”.
|-
| 03/06/17 || 100 || 100 || 100 || 90 || 100 || 100 || 100 || 98 || N/A || CMS: Some user timeout errors on SRM tests.
|-
| 04/06/17 || 100 || 100 || 100 || 97 || 100 || 100 || 100 || 100 || N/A || CMS: Some user timeout errors on SRM tests.
|-
| 05/06/17 || 100 || 100 || 100 || 94 || 100 || 100 || 100 || 100 || N/A || CMS: SRM test failures. “User timeout” – some on Put, some on Get.
|-
| 06/06/17 || 100 || 100 || 100 || 92 || 100 || 100 || 100 || 100 || 99 || CMS: SRM test failures. “User timeout” – some on Put, some on Get.
|}
Notes from Meeting.
  • There will most probably NOT be a meeting in the next two weeks (this clashes with HEP Sysman and the WLCG Workshop). However, a report will be produced and comments invited.
  • There was discussion around the date for upgrading the 'production' FTS3 service, which will terminate the SOAP interface to FTS3. A possible date is 7th July 2017.
  • MICE have now stopped data taking; the next data-taking period is in September. They are ready for us to upgrade FTS3 and drop the SOAP interface.