From GridPP Wiki
Latest revision as of 14:30, 21 December 2016
RAL Tier1 Operations Report for 21st December 2016
The Tier1 will stay operational over the Christmas and New Year holiday. Details of the plans are available on our blog: http://www.gridpp.rl.ac.uk/blog/2016/12/15/ral-tier1-plans-for-christmas-new-year-holiday-201617/
Review of Issues during the week 14th to 21st December 2016.
- We have had load issues on CMSTape in Castor, which have led to some SAM test failures.
Resolved Disk Server Issues
- GDSS677 (CMSTape - D0T1) failed late evening on 14th Dec. The following day it was put back in service and seven of the eight files awaiting migration to tape were flushed off it. One file was declared lost to CMS.
- GDSS822 (AtlasDataDisk - D1T0) failed in the early hours of Tuesday 20th Dec. It was returned to service late afternoon the same day. It is unclear why the server had stopped.
- GDSS749 (AtlasDataDisk - D1T0) was unresponsive during Tuesday evening (20th Dec) and was taken out of production. A disk was found to have failed; removing that disk from the RAID array allowed the server to work normally again. The server was put back in production later that evening and was left in read-only mode overnight.
Current operational status and issues
- LHCb see a low but persistent rate of failure when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
Ongoing Disk Server Issues
- GDSS671 (AliceDisk - D1T0) failed on Saturday (17th Dec) with a read-only filesystem. Two failing drives have been identified.
Notable Changes made since the last meeting.
- Last Wednesday (14th Dec) firmware updates were applied to the RAID cards in the Clustervision '13 batch of disk servers.
- A modification to the weighting of the different Castor servers was made on Monday to reduce the available number of 'slots' for the old Clustervision '11 disk servers in D0T1 instances.
- Changes were made to the FTS services (test and production) to fix a certificate problem that was triggered by a recent FTS update.
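The certificate problem above was triggered by an FTS update. As a general illustration only (not the actual fix applied), the remaining lifetime of a host certificate can be checked from Python's standard library; any endpoint host and port passed to the checker below are placeholders, not the real FTS service details.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> float:
    """Days remaining, given the 'notAfter' string returned by
    ssl.SSLSocket.getpeercert(), e.g. 'Jan 10 10:00:00 2017 GMT'."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).total_seconds() / 86400.0

def check_host_certificate(host: str, port: int) -> float:
    """Connect to host:port over TLS and report days until the
    presented host certificate expires (host/port are placeholders)."""
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port)),
                         server_hostname=host) as sock:
        return days_until_expiry(sock.getpeercert()["notAfter"])
```

Calling `check_host_certificate(host, port)` against a live endpoint returns a positive number of days while the certificate remains valid, making an expiring certificate easy to catch before a service update exposes it.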
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Castor CMS instance | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded). |
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded). |
Castor Atlas instance | SCHEDULED | OUTAGE | 24/01/2017 10:00 | 24/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded). |
Castor LHCb instance | SCHEDULED | OUTAGE | 17/01/2017 10:00 | 17/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded). |
All Castor (all SRM endpoints) | SCHEDULED | OUTAGE | 10/01/2017 10:00 | 10/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Upgrade of Nameserver component. All instances affected. |
All Castor | SCHEDULED | OUTAGE | 05/01/2017 09:30 | 05/01/2017 17:00 | 7 hours and 30 minutes | Outage of Castor Storage System for patching |
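Downtimes like those declared above can also be read programmatically from the GOC DB's public programmatic interface (its `get_downtime` method), which returns XML. The sketch below parses a hardcoded sample shaped like such a response; the element names and sample values are assumptions for illustration, not copied from live GOC DB output.

```python
import xml.etree.ElementTree as ET

# Sample shaped like a GOC DB PI get_downtime response
# (schema and values assumed here, not taken from live output).
SAMPLE = """\
<results>
  <DOWNTIME ID="1">
    <SEVERITY>OUTAGE</SEVERITY>
    <DESCRIPTION>Castor 2.1.15 Upgrade. Upgrade of Nameserver component.</DESCRIPTION>
    <HOSTNAME>srm-atlas.gridpp.rl.ac.uk</HOSTNAME>
    <START_DATE>1484042400</START_DATE>
    <END_DATE>1484064000</END_DATE>
  </DOWNTIME>
</results>"""

def parse_downtimes(xml_text: str) -> list:
    """Return one dict per declared downtime, keyed by element tag."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in dt}
            for dt in root.findall("DOWNTIME")]
```

A live query would fetch the XML from the GOC DB PI endpoint for the site and feed it to `parse_downtimes`, which makes it easy to cross-check the declared schedule against local plans.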
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
- Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017.
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-atlas.gridpp.rl.ac.uk, srm-cms-disk.gridpp.rl.ac.uk, srm-lhcb.gridpp.rl.ac.uk, | SCHEDULED | WARNING | 14/12/2016 10:00 | 14/12/2016 17:00 | 7 hours | Warning while some disk servers have rolling firmware updates and are rebooted. Temporary loss of access to files. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
125480 | Green | Less Urgent | On Hold | 2016-12-09 | 2016-12-21 | | total Physical and Logical CPUs values |
125157 | Green | Less Urgent | In Progress | 2016-11-24 | 2016-12-07 | | Creation of a repository within the EGI CVMFS infrastructure |
124876 | Yellow | Less Urgent | On Hold | 2016-11-07 | 2016-11-21 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
124606 | Red | Top Priority | Waiting for Reply | 2016-10-24 | 2016-12-09 | CMS | Consistency Check for T1_UK_RAL |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-12-07 | | CASTOR at RAL not publishing GLUE 2. Plan to revisit week starting 19th. |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
14/12/16 | 100 | 100 | 100 | 44 | 100 | N/A | 100 | Load problem on CMS_Tape causing multiple test failures. |
15/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
16/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
17/12/16 | 100 | 100 | 100 | 99 | 100 | N/A | 99 | Single SRM test failure: User timeout over |
18/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
19/12/16 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | Several SRM test failures: User timeout over |
20/12/16 | 100 | 100 | 100 | 93 | 100 | N/A | 100 | Several SRM test failures: User timeout over |
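As a quick cross-check of the table above, the week's CMS availability implied by the daily figures can be estimated with a simple arithmetic mean. The official WLCG availability calculation is time-weighted over individual test results, so this is only an approximation.

```python
# Daily CMS availability (%) for 14th-20th Dec, from the table above.
cms_daily = [44, 100, 100, 99, 100, 96, 93]

# Simple arithmetic mean; the official WLCG figure is time-weighted
# over individual test results, so treat this as an approximation.
weekly = sum(cms_daily) / len(cms_daily)
print(round(weekly, 1))  # -> 90.3
```

The low figure is driven almost entirely by the CMS_Tape load problem on the 14th; the remaining days sit at or near 100%.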
Notes from Meeting.
- Catalin reported that he had closed a couple of GGUS tickets relating to the WMS service, with thanks to Daniela for her help with these. Given the limited support and usage for the WMS, it was proposed that Catalin send in a bug report (with a workaround) but need not follow this up.
- The Castor 2.1.15 upgrade planned for the new year will also eliminate the requirement for SL5 which is currently used for the SRMs.
- Brian is in contact with the DIRAC Cambridge site ahead of them transferring data to us.
- There is a proposal to change the notification periods required for scheduled downtimes (in WLCG). Jeremy summarized this.