Tier1 Operations Report 2018-01-10
From GridPP Wiki
Latest revision as of 13:08, 17 January 2018
RAL Tier1 Operations Report for 10th January 2018
Review of Issues during the week 3rd January 2018 to 10th January 2018
- Disk server gdss745 (AtlasDataDisk - D1T0) has failed with the loss of all data on the server. The problem was triggered by a failed drive; however, errors seen on other disk drives while this one was rebuilding led to the loss of the RAID6 array. There were around 960,000 files on the server, around half of which were unique. A post-mortem investigation of this incident will be carried out.
- There was a problem with the Castor GEN instance yesterday morning (Tuesday 9th). Staff restarted some of the Castor processes and the service was restored by the end of the morning.
- Patching is ongoing for recent security issues.
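The gdss745 incident above illustrates a well-known RAID6 failure mode: latent errors on the surviving drives surfacing during the long rebuild window, before redundancy is restored. A rough, purely illustrative estimate of that risk (drive count, capacity and error rate below are assumptions, not figures from this incident):

```python
import math

# Rough sketch of why a long RAID6 rebuild is risky: rebuilding one failed
# drive means reading every surviving drive end to end, which raises the
# chance of hitting a latent unrecoverable read error (URE). All figures
# below are illustrative assumptions, not measurements from gdss745.

def unrecoverable_read_prob(bytes_read, ure_rate_per_bit=1e-15):
    """Probability of at least one URE while reading `bytes_read` bytes."""
    bits = bytes_read * 8
    # 1 - (1 - rate)^bits, computed stably for tiny rates and huge bit counts
    return -math.expm1(bits * math.log1p(-ure_rate_per_bit))

# Assume a 16-drive array of 4 TB drives: a rebuild reads the 15 survivors.
p = unrecoverable_read_prob(15 * 4e12)
print(f"P(>=1 URE during rebuild) ~ {p:.2f}")  # ~0.38 under these assumptions
```

Even with optimistic per-bit error rates, the probability of some read error during a full-array rebuild is far from negligible, which is why a second (or here, cascading) drive problem during a rebuild can cost the whole array.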
Current operational status and issues
- Ongoing security patching.
Resolved Castor Disk Server Issues
- gdss736 (LHCb - D1T0) - Has been rebuilt and returned to production.
- gdss756 (CMS - D1T0) - Disk server crashed after a double disk failure; ports 24, 28, 3 and 8 were affected.
Ongoing Castor Disk Server Issues
- None
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- The termination of the WMS service had been announced, with a drain (i.e. not accepting new jobs) planned to start on 1st February. However, as the WMS is not currently being used by VOs and security patches need to be applied urgently, the drain was brought forward and started this morning (10th Jan).
- Other security patching done or underway.
Entries in GOC DB starting since the last report.
No downtime scheduled in the GOCDB between 2017-12-20 and 2018-01-03
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
WMS Service: lcglb01, lcglb02, lcgwms04, lcgwms05 | SCHEDULED | OUTAGE | 12/01/2018 10:00 | 19/01/2018 12:00 | 7 days, 2 hours | WMS Decommissioning RAL Tier1 |
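The Duration column in such GOC DB entries follows directly from the Start and End timestamps. A quick sketch checking the WMS decommissioning entry above (timestamps are in DD/MM/YYYY HH:MM form, as shown in the table):

```python
from datetime import datetime

# Parse the start/end timestamps from the WMS decommissioning entry above.
fmt = "%d/%m/%Y %H:%M"
start = datetime.strptime("12/01/2018 10:00", fmt)
end = datetime.strptime("19/01/2018 12:00", fmt)

delta = end - start
days = delta.days
hours = delta.seconds // 3600  # remainder of the timedelta, in whole hours
print(f"{days} days, {hours} hours")  # 7 days, 2 hours
```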
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Ongoing or Pending - but not yet formally announced:
Listing by category:
- Castor:
- Update systems to use SL7 and be configured by Quattor/Aquilon. (Tape servers done)
- Move to generic Castor headnodes.
- Echo:
- Update to next CEPH version ("Luminous").
- Networking
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
- Services
- Internal
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure
- Testing of power distribution boards in the R89 machine room is being scheduled for some time in late July / early August. The effect of this on our services is being discussed.
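Progress on the IPv6 dual-stack rollout mentioned above can be verified from the resolver side: a service endpoint is dual stack when its name resolves to both IPv4 (A) and IPv6 (AAAA) addresses. A minimal sketch; the address tuples and hostname below are illustrative examples, not real service endpoints:

```python
import socket

def stack_support(addrinfos):
    """Classify getaddrinfo-style results: does the name resolve over
    IPv4, IPv6, or both?"""
    families = {ai[0] for ai in addrinfos}
    return {"ipv4": socket.AF_INET in families,
            "ipv6": socket.AF_INET6 in families}

# Synthetic (family, type, proto, canonname, sockaddr) tuples standing in
# for a real socket.getaddrinfo() result; the addresses are documentation
# examples (RFC 5737 / RFC 3849), not real hosts.
fake = [
    (socket.AF_INET, socket.SOCK_STREAM, 6, "", ("192.0.2.1", 443)),
    (socket.AF_INET6, socket.SOCK_STREAM, 6, "", ("2001:db8::1", 443, 0, 0)),
]
print(stack_support(fake))  # {'ipv4': True, 'ipv6': True}

# Live check against a real host (requires network access):
# stack_support(socket.getaddrinfo("somehost.example.org", 443))
```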
Open GGUS Tickets (Snapshot during morning of meeting)
Ticket-ID | Type | VO | Notified Site | Resp. Unit | Status | Priority | Creation | Last Update | ToI | Subject |
---|---|---|---|---|---|---|---|---|---|---|
132748 | USER | ops | RAL-LCG2 | NGI_UK | in progress | less urgent | 2018-01-08 14:45:00 | 2018-01-09 09:58:00 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 |
132712 | USER | other | RAL-LCG2 | NGI_UK assign to:lcg-support@gridpp.rl.ac.uk | in progress | less urgent | 2018-01-04 16:22:00 | 2018-01-08 13:35:00 | Other | support for the hyperk VO (RAL-LCG2) |
132589 | TEAM | lhcb | RAL-LCG2 | NGI_UK | in progress | very urgent | 2017-12-21 06:45:00 | 2018-01-03 15:21:00 | Local Batch System | Killed pilots at RAL |
131815 | USER | t2k.org | RAL-LCG2 | NGI_UK | in progress | less urgent | 2017-11-13 14:42:00 | 2017-12-01 19:30:00 | Storage Systems | Extremely long download times for T2K files on tape at RAL |
127597 | USER | cms | RAL-LCG2 | NGI_UK assign to:lcg-support@gridpp.rl.ac.uk share with:sexton@fnal.gov | on hold | urgent | 2017-04-07 10:34:00 | 2017-10-05 09:14:00 | File Transfer | Check networking and xrootd RAL-CERN performance |
124876 | USER | ops | RAL-LCG2 | NGI_UK assign to:lcg-support@gridpp.rl.ac.uk | on hold | less urgent | 2016-11-07 12:06:00 | 2017-11-13 16:55:00 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | USER | none | RAL-LCG2 | NGI_UK assign to:lcg-support@gridpp.rl.ac.uk | on hold | less urgent | 2015-11-18 11:36:00 | 2018-01-03 15:26:00 | Information System | CASTOR at RAL not publishing GLUE 2 |
Availability Report
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment |
---|---|---|---|---|---|---|---|
03/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
04/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
05/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
06/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
07/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
08/01/18 | 100 | 100 | 100 | 100 | 100 | 100 | |
09/01/18 | 100 | 100 | 100 | 100 | 100 | 100 |
Hammercloud Test Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_UCORE, Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
03/01/18 | 100 | 100 | |
04/01/18 | 100 | 100 | |
05/01/18 | 100 | 100 | |
06/01/18 | 100 | 100 | |
07/01/18 | 100 | 100 | |
08/01/18 | 100 | 100 | |
09/01/18 | 100 | 100 |
Notes from Meeting.
- None yet.