Difference between revisions of "Tier1 Operations Report 2017-12-06"

Latest revision as of 08:35, 13 December 2017

RAL Tier1 Operations Report for 6th December 2017

Review of Issues during the week 30th November to 6th December 2017.

Castor:
  • On Wednesday (6th Dec) all the SRM systems (except LHCb, which had already been done) were successfully upgraded to the latest version (2.1.16-18).
  • Three disk servers (old ones from 2012) have been added to the LHCb disk-only space in Castor to alleviate problems of this area being too full.

Echo:
  • The maximum number of gridftp connections to each Echo gateway has been increased to 200 (from 100).
  • Echo is running normally. Background scrubbing is ongoing; this is flushing out bad disks, and the rate at which it finds them is expected to drop over the next week or two. The plan is to run like this through the holiday period.

Services:
  • EGI will withdraw support for the WMS from the start of 2018. Our WMS service will be stopped on the same timescale.

Network:
  • There was a problem of high packet loss for traffic to/from the Tier1 that passed through the RAL core network (and firewall) on Monday (4th). The problem started at midnight and was fixed around 15:30.

Infrastructure:
  • Following the failure of the generator to start during the power outage a couple of weeks ago, a faulty emergency power-off switch was found and has been replaced. Plans are being made for a generator load test, hopefully on Wednesday (13th Dec).

Certificates:
  • Following problems with the updated UK CA certificate in the IGTF 1.88 rollout we had updated and then rolled back. This left us with some issues in our configuration/deployment system (Quattor/Aquilon), but those were resolved quickly. We made a plan to roll forward again tomorrow (12th Dec), and that is still the plan.

Christmas Plans:
  • We will follow the same pattern as in previous years. RAL is closed after Friday afternoon 22nd December and will re-open on Tuesday morning 2nd January. The on-call team will be in place throughout the holiday as usual, and some additional checks will be made by those on call. Furthermore, support will be limited on Christmas Day, Boxing Day and New Year's Day.

Current operational status and issues
Resolved Disk Server Issues
  • None
Ongoing Disk Server Issues
  • None
Limits on concurrent batch system jobs.
  • CMS Multicore 550
Notable Changes made since the last meeting.
  • Allocation in Echo for ATLAS increased to 4.1PB. They now have 4PB in datadisk and 100TB in scratchdisk. This is part of the gradual increase of their usage to 5.1PB.
  • The maximum number of gridftp connections to each Echo gateway has been increased to 200 (from 100).
Entries in GOC DB starting since the last report.
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-alice.gridpp.rl.ac.uk, srm-atlas.gridpp.rl.ac.uk, srm-biomed.gridpp.rl.ac.uk, srm-cert.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-cms.gridpp.rl.ac.uk, srm-dteam.gridpp.rl.ac.uk, srm-ilc.gridpp.rl.ac.uk, srm-mice.gridpp.rl.ac.uk, srm-minos.gridpp.rl.ac.uk, srm-na62.gridpp.rl.ac.uk, srm-pheno.gridpp.rl.ac.uk, srm-preprod.gridpp.rl.ac.uk, srm-snoplus.gridpp.rl.ac.uk, srm-solid.gridpp.rl.ac.uk, srm-t2k.gridpp.rl.ac.uk, SCHEDULED OUTAGE 06/12/2017 13:00 06/12/2017 15:00 2 hours Upgrade of non-LHCb SRM to version 2.1.16-18
lcgfts3.gridpp.rl.ac.uk, SCHEDULED WARNING 05/12/2017 11:00 05/12/2017 13:00 2 hours FTS update to v3.7.7
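The durations listed above can be checked directly from the start/end timestamps; a quick sketch, with the dates copied from the table:

```python
from datetime import datetime

def outage_hours(start: str, end: str) -> float:
    """Length of a GOC DB entry in hours, given 'DD/MM/YYYY HH:MM' stamps."""
    fmt = "%d/%m/%Y %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Entries from the table above
srm_outage = outage_hours("06/12/2017 13:00", "06/12/2017 15:00")   # SRM upgrade
fts_warning = outage_hours("05/12/2017 11:00", "05/12/2017 13:00")  # FTS update
print(srm_outage, fts_warning)  # both come out as 2.0 hours
```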
Declared in the GOC DB
  • None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.

Ongoing or Pending - but not yet formally announced:

Listing by category:

  • Castor:
    • Update systems (initially tape servers) to use SL7, configured by Quattor/Aquilon.
    • Move to generic Castor headnodes.
  • Echo:
    • Update to next CEPH version ("Luminous").
  • Networking
    • Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
  • Services
  • Internal
    • DNS servers will be rolled out within the Tier1 network.
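The dual-stack rollout above can be spot-checked per host by asking the resolver which address families a name publishes; a minimal sketch (the hostname in the comment is only an example, and the check assumes DNS is the source of truth for dual-stack status):

```python
import socket

def address_families(host: str) -> set:
    """Return the set of address families ('IPv4'/'IPv6') a hostname resolves to."""
    names = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}
    fams = set()
    for family, *_ in socket.getaddrinfo(host, None):
        if family in names:
            fams.add(names[family])
    return fams

# A dual-stack service should report both families, e.g. (hypothetically):
# address_families("lcgfts3.gridpp.rl.ac.uk")  ->  {'IPv4', 'IPv6'}
```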
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
132356 Green Very Urgent Waiting for Reply 2017-12-07 2017-12-11 Ops [Rod Dashboard] Issue detected : org.nagios.GLUE2-Check@site-bdii.gridpp.rl.ac.uk
132336 Green Less Urgent In Progress 2017-12-05 2017-12-06 Ops [Rod Dashboard] Issue detected : org.nagios.GLUE2-Check@site-bdii.gridpp.rl.ac.uk
132314 Green Less Urgent In Progress 2017-12-05 2017-12-11 Ops [Rod Dashboard] Issue detected : org.nordugrid.ARC-CE-SRM-result-ops@arc-ce02.gridpp.rl.ac.uk
132222 Green Urgent In Progress 2017-11-30 2017-12-05 CMS Transfers failing to T1_UK_RAL_Disk
131840 Green Urgent Waiting for reply 2017-11-14 2017-12-05 Other solidexperiment.org CASTOR tape copy fails
131815 Green Less Urgent In Progress 2017-11-13 2017-12-01 T2K.Org Extremely long download times for T2K files on tape at RAL
130207 Red Urgent On Hold 2017-08-24 2017-11-13 MICE Timeouts when copying MICE reco data to CASTOR
127597 Red Urgent On Hold 2017-04-07 2017-10-05 CMS Check networking and xrootd RAL-CERN performance
124876 Red Less Urgent On Hold 2016-11-07 2017-11-13 Ops [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 Red Less Urgent On Hold 2015-11-18 2017-11-06 None CASTOR at RAL not publishing GLUE 2
Availability Report
Day OPS Alice Atlas CMS LHCb Atlas Echo Comment
6/12/17 100 100 83 81 100 100
7/12/17 100 100 100 100 100 100
8/12/17 100 100 100 100 100 100
9/12/17 100 100 100 100 100 100
10/12/17 100 100 100 100 100 100
11/12/17 100 100 100 100 100 100
12/12/17 100 100 100 100 100 100
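Averaged over the week, the daily figures above work out as follows; a quick sketch, with the numbers copied from the table:

```python
# Daily availability (%) for 6-12 Dec, copied from the table above
availability = {
    "OPS":        [100, 100, 100, 100, 100, 100, 100],
    "Alice":      [100, 100, 100, 100, 100, 100, 100],
    "Atlas":      [83, 100, 100, 100, 100, 100, 100],
    "CMS":        [81, 100, 100, 100, 100, 100, 100],
    "LHCb":       [100] * 7,
    "Atlas Echo": [100] * 7,
}

# Simple arithmetic mean per VO, rounded to one decimal place
weekly = {vo: round(sum(days) / len(days), 1) for vo, days in availability.items()}
print(weekly)  # Atlas comes out at 97.6, CMS at 97.3, the rest at 100.0
```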
Hammercloud Test Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud

Day Atlas HC Atlas HC Echo CMS HC Comment
6/12/17 99 99 81 Atlas HC Echo - No test run in time bin
7/12/17 89 100 100 Atlas HC Echo - No test run in time bin
8/12/17 100 100 100 Atlas HC Echo - No test run in time bin
9/12/17 100 0 100 Atlas HC Echo - No test run in time bin
10/12/17 100 0 100 Atlas HC Echo - No test run in time bin
11/12/17 100 0 100 Atlas HC Echo - No test run in time bin
12/12/17 99 0 100 Atlas HC Echo - No test run in time bin
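For the HammerCloud figures, the zeros in the Atlas HC Echo column correspond to "no test run in time bin" rather than failed tests, so a weekly average arguably should skip those bins. A sketch of that calculation (treating 0 as "no data" is our assumption here, based on the table's comments):

```python
# Atlas HC Echo daily scores for 6-12 Dec, from the table above;
# 0 here means no test ran in the time bin, not a 0% success rate (assumption).
echo_scores = [99, 100, 100, 0, 0, 0, 0]

tested = [s for s in echo_scores if s > 0]  # drop the no-data bins
mean_when_tested = round(sum(tested) / len(tested), 1)
print(mean_when_tested)  # 99.7 over the three days with tests
```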
Notes from Meeting.
  • EGI will withdraw support for the WMS from the end of 2017. Our WMS service will be stopped on this timescale.
  • There is a problem with Perfsonar measurements using IPv6 to nodes accessed via JANET.
  • There was a discussion about how best to bring files back online from tape. The MICE VO needs a better (bulk) solution than they are using at the moment.