RAL Tier1 Operations Report for 29th March 2017

Review of Issues during the week 22nd to 29th March 2017.
  • The upgrade of the LHCb SRMs last Thursday (23rd March) went well. However, there were problems upgrading the Atlas ones yesterday (Tuesday 28th). This problem is now understood and the upgrade is continuing today.
  • Some batch job submission errors have been seen by CMS and LHCb. These are not yet understood.
  • Over the weekend there were two instances where a number of Tier1 systems lost connections between themselves. This was transitory and the connections re-established themselves. The cause is not yet understood.
Resolved Disk Server Issues
  • GDSS647 (LHCbDst - D1T0) crashed in the early hours of Tuesday 28th March. The RAID card had become confused following the failure of a disk drive. It was being drained and was returned to service this morning (29th Mar).
Current operational status and issues
  • We are still seeing failures of the CMS SAM tests against the SRM. These affect our CMS availability figures, although the failure rate is lower than it was a few weeks ago.
Ongoing Disk Server Issues
  • GDSS780 (LHCbDst - D1T0) failed on Sunday (26th) and was put back in service the following day. However, it crashed again this morning and is currently under investigation.
Limits on concurrent batch system jobs.
  • Atlas Pilot (Analysis): 1500
  • CMS Multicore: 460
Notable Changes made since the last meeting.
  • SRM upgrades (to version 2.1.16-10, with the OS updated to SL6) were carried out for LHCb last Thursday (23rd March) and are ongoing for Atlas. The CMS and 'GEN' instances are planned for tomorrow.
  • Work ongoing on replacing two of the chillers. The first replacement has been commissioned. The second one is due to be shipped out tomorrow (Thursday 30th March).
  • This morning (Wed 29th March) the web alias www.gridpp.rl.ac.uk was flipped to point to a new web server (a quick way of confirming the new target is sketched just after this list).
  • Out of Hours cover for the CEPH ECHO service is being piloted.
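For reference, the following is a minimal sketch (Python standard library only) of how one could confirm where the web alias now resolves; it is purely illustrative and was not part of the change itself.

    import socket

    # Resolve the web alias and show the canonical host name and the
    # addresses it currently points at (illustrative check only).
    alias = "www.gridpp.rl.ac.uk"
    canonical, aliases, addresses = socket.gethostbyname_ex(alias)
    print("alias:    ", alias)
    print("canonical:", canonical)
    print("addresses:", ", ".join(addresses))
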
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Castor CMS and GEN instances (SRMs) | SCHEDULED | WARNING | 30/03/2017 10:00 | 30/03/2017 16:30 | 6 hours and 30 minutes | Upgrading SRM software to 2.1.16
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | WARNING | 29/03/2017 09:00 | 29/03/2017 16:30 | 7 hours and 30 minutes | Upgrading Atlas SRM software to 2.1.16
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Pending - but not yet formally announced:

  • Update Castor SRMs - ongoing.
  • Chiller replacement - work ongoing.
  • Merge AtlasScratchDisk into larger Atlas disk pool.

Listing by category:

  • Castor:
    • Update SRMs to new version, including updating to SL6.
    • Bring some newer disk servers ('14 generation) into service, replacing some older ('12 generation) servers.
  • Networking
    • Enable first services on production network with IPv6 once addressing scheme agreed.
  • Infrastructure:
    • Two of the chillers supplying the air-conditioning for the R89 machine room are being replaced.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-atlas.gridpp.rl.ac.uk | UNSCHEDULED | WARNING | 29/03/2017 09:00 | 29/03/2017 16:30 | 7 hours and 30 minutes | Upgrading Atlas SRM software to 2.1.16
srm-atlas.gridpp.rl.ac.uk | SCHEDULED | WARNING | 28/03/2017 11:00 | 28/03/2017 18:00 | 7 hours | Risk to Atlas SRM service while we upgrade the SRM to 2.1.16
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 23/03/2017 11:00 | 23/03/2017 16:02 | 5 hours and 2 minutes | Upgrade of Castor SRMs for LHCb to version 2.1.16-10
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
127332 | Green | Top Priority | In Progress | 2017-03-28 | 2017-03-29 | LHCb | Expired host certificate on diskserver gdss828.gridpp.rl.ac.uk
127251 | Green | Urgent | Waiting Reply | 2017-03-21 | 2017-03-27 | Atlas | Encountering: TNetXNGFile::Open ERROR [ERROR] Server responded with an error: [3005] Unable to do async GET request. Disk to disk copy failed after 0 retries. Last error was : Transfer has disappeared from the scheduling system; Unknown error 1015
127240 | Green | Urgent | In Progress | 2017-03-21 | 2017-03-27 | CMS | Staging Test at UK_RAL for Run2
127185 | Green | Urgent | In Progress | 2017-03-17 | 2017-03-17 | - | WLGC-IPv6 readiness
126905 | Green | Less Urgent | In Progress | 2017-03-02 | 2017-03-21 | solid | finish commissioning cvmfs server for solidexperiment.org
126184 | Yellow | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | - | CASTOR at RAL not publishing GLUE 2.
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 842); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment
22/03/17 | 100 | 100 | 100 | 99 | 100 | 100 | 99 | 97 | SRM test failure on GET (User timeout).
23/03/17 | 100 | 100 | 100 | 98 | 79 | 100 | 100 | 100 | CMS: SRM test failure on GET (User timeout); LHCb: SRM upgrade.
24/03/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | SRM test failure on GET (User timeout).
25/03/17 | 100 | 100 | 100 | 98 | 96 | 100 | 100 | 100 | CMS: SRM test failures on GET (User timeout); LHCb: Single SRM test failure - couldn't contact SRM.
26/03/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | SRM test failure on GET (User timeout).
27/03/17 | 100 | 100 | 98 | 92 | 96 | 98 | 94 | 100 | Atlas: Single SRM test failure: "error 500 Command failed."; LHCb: Single SRM test failure on List (SRM_FILE_BUSY); CMS: Mainly SRM timeouts - but some CE job submission issues.
28/03/17 | 100 | 100 | 100 | 97 | 100 | 95 | 100 | 100 | Some SRM test timeouts.
Notes from Meeting.
  • The job submission errors seen by LHCb were discussed. There may be a problem with the maximum submission rate. This is being followed up. One consequence is that LHCb are running fewer jobs at the RAL Tier1.
  • Once the Castor SRMs are updated the next update will be to take Castor to version 2.1.16. Likely timescale is this summer.
  • LHCb had reported an expired gridftp certificate on a Castor disk server. The main host certificates are monitored. An action was created to sort out monitoring of these additional certificates. A similar check on additional certificates on CEPH ECHO systems needs to be added too (an illustrative expiry check is sketched at the end of these notes).
  • CMS job efficiencies were discussed. These are very poor at RAL - a bit worse than at other sites. Separately, investigations into AAA access to data elsewhere are being followed up.
  • Catalin reported that ALICE access their data here using xrootd and use MD5 checksums.
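Purely as an illustration of the kind of certificate-expiry check discussed above (not the monitoring that was agreed as an action), a check could read a certificate file on the host and warn when the expiry date is near. The sketch below assumes Python with the third-party 'cryptography' package; the file path and 30-day threshold are example values only.

    import sys
    from datetime import datetime, timezone

    from cryptography import x509                      # third-party 'cryptography' package
    from cryptography.hazmat.backends import default_backend

    WARN_DAYS = 30  # example threshold only


    def days_until_expiry(pem_path):
        # Parse the PEM certificate and return the number of days left
        # before its notAfter date is reached.
        with open(pem_path, "rb") as handle:
            cert = x509.load_pem_x509_certificate(handle.read(), default_backend())
        expires = cert.not_valid_after.replace(tzinfo=timezone.utc)
        return (expires - datetime.now(timezone.utc)).days


    if __name__ == "__main__":
        # Example path only: the conventional location of a grid host certificate.
        path = sys.argv[1] if len(sys.argv) > 1 else "/etc/grid-security/hostcert.pem"
        remaining = days_until_expiry(path)
        print("%s expires in %d days" % (path, remaining))
        sys.exit(0 if remaining > WARN_DAYS else 1)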