Latest revision as of 14:02, 4 January 2017
RAL Tier1 Operations Report for 4th January 2017
Review of Issues during the fortnight 21st December 2016 to 4th January 2017.
Overall it was a fairly quiet holiday time operationally:
- There was a failure of a Power Distribution Unit in a rack in the UPS room in the early hours of Friday 23rd December. Staff attended on site to get the power back. This mainly affected internal services (monitoring etc) - but also the Top BDIIs. This was the second time this PDU had given problems and it was swapped out during Friday morning.
- We have had load issues on the CMS Castor instance throughout the holiday period, which have led to repeated SAM test failures.
Resolved Disk Server Issues
- GDSS671 (AliceDisk - D1T0) failed on Saturday (17th Dec) with a read-only filesystem. Three drives were replaced and the server was returned to service on the 22nd Dec.
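A read-only filesystem, as seen on GDSS671, is typically the kernel protecting a failing disk by remounting it `ro`. A minimal sketch of how such remounts can be spotted from `/proc/mounts` (illustrative only, not the actual Tier1 monitoring tooling; the mount point name is made up):

```python
# Hypothetical helper: spot filesystems the kernel has remounted
# read-only -- the symptom GDSS671 showed before its drives were replaced.

def readonly_mounts(mounts_text):
    """Return mount points whose mount options include 'ro'.

    `mounts_text` is text in the format of /proc/mounts:
    device mountpoint fstype options dump pass
    """
    bad = []
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) >= 4 and "ro" in parts[3].split(","):
            bad.append(parts[1])
    return bad

# Example against a canned /proc/mounts snippet:
sample = (
    "/dev/sda1 / ext4 rw,relatime 0 0\n"
    "/dev/sdb1 /exportstage ext4 ro,relatime 0 0\n"
)
print(readonly_mounts(sample))  # ['/exportstage']
```

In practice a check like this would read `/proc/mounts` directly and alarm when a data partition appears in the list.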
Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites.
Ongoing Disk Server Issues
- GDSS665 (lhcbRawRdst - D0T1) failed on Saturday (31st Dec). Investigations are ongoing.
Notable Changes made since the last meeting.
- None
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Castor CMS instance | SCHEDULED | OUTAGE | 31/01/2017 10:00 | 31/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting CMS instance. (CMS stager component being upgraded). |
Castor GEN instance | SCHEDULED | OUTAGE | 26/01/2017 10:00 | 26/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting GEN instance. (GEN stager component being upgraded). |
Castor Atlas instance | SCHEDULED | OUTAGE | 24/01/2017 10:00 | 24/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting Atlas instance. (Atlas stager component being upgraded). |
Castor LHCb instance | SCHEDULED | OUTAGE | 17/01/2017 10:00 | 17/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Only affecting LHCb instance. (LHCb stager component being upgraded). |
All Castor (all SRM endpoints) | SCHEDULED | OUTAGE | 10/01/2017 10:00 | 10/01/2017 16:00 | 6 hours | Castor 2.1.15 Upgrade. Upgrade of Nameserver component. All instances affected. |
gridftp.echo.stfc.ac.uk, s3.echo.stfc.ac.uk, xrootd.echo.stfc.ac.uk | SCHEDULED | OUTAGE | 05/01/2017 10:00 | 06/01/2017 15:00 | 1 day, 5 hours | ECHO Re-install |
All Castor | SCHEDULED | OUTAGE | 05/01/2017 09:30 | 05/01/2017 17:00 | 7 hours and 30 minutes | Outage of Castor Storage System for patching |
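The Duration column above follows directly from the Start and End timestamps. A minimal sketch of the arithmetic, assuming the GOC DB's DD/MM/YYYY HH:MM timestamp format:

```python
from datetime import datetime

FMT = "%d/%m/%Y %H:%M"  # GOC DB-style timestamps, e.g. "05/01/2017 10:00"

def outage_duration(start, end):
    """Render the interval between two timestamps as days/hours/minutes."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    minutes = int(delta.total_seconds()) // 60
    days, rem = divmod(minutes, 24 * 60)
    hours, mins = divmod(rem, 60)
    parts = []
    if days:
        parts.append(f"{days} day" + ("s" if days != 1 else ""))
    if hours:
        parts.append(f"{hours} hour" + ("s" if hours != 1 else ""))
    if mins:
        parts.append(f"{mins} minute" + ("s" if mins != 1 else ""))
    return ", ".join(parts) or "0 minutes"

# The ECHO re-install entry above:
print(outage_duration("05/01/2017 10:00", "06/01/2017 15:00"))  # 1 day, 5 hours
```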
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
- Update to Castor version 2.1.15. Dates announced via GOC DB for early 2017.
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
lcgbdii.gridpp.rl.ac.uk | UNSCHEDULED | OUTAGE | 23/12/2016 01:30 | 23/12/2016 03:30 | 2 hours | Networking problems affecting part of the Tier-1 service. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
125480 | Green | Less Urgent | On Hold | 2016-12-09 | 2016-12-21 | | total Physical and Logical CPUs values |
125157 | Green | Less Urgent | In Progress | 2016-11-24 | 2017-01-03 | | Creation of a repository within the EGI CVMFS infrastructure |
124876 | Yellow | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2016-12-07 | | CASTOR at RAL not publishing GLUE 2. We looked at this as planned in December (report). |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 808); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
21/12/16 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | SRM test failures - User timeout |
22/12/16 | 100 | 100 | 100 | 100 | 100 | N/A | 100 | |
23/12/16 | 100 | 100 | 100 | 88 | 100 | N/A | N/A | SRM test failures - User timeout |
24/12/16 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | SRM test failures - User timeout |
25/12/16 | 100 | 100 | 100 | 97 | 100 | N/A | 100 | SRM test failures - User timeout |
26/12/16 | 100 | 100 | 100 | 97 | 100 | N/A | 100 | SRM test failures - User timeout |
27/12/16 | 100 | 100 | 100 | 96 | 100 | N/A | N/A | SRM test failures - User timeout |
28/12/16 | 100 | 100 | 100 | 79 | 100 | N/A | 100 | SRM test failures - User timeout |
29/12/16 | 100 | 100 | 97 | 62 | 100 | N/A | 100 | CMS: Block of SRM test failures - User timeout; Atlas: single SRM test failure ( could not open connection to srm-atlas.gridpp.rl.ac.uk). |
30/12/16 | 100 | 100 | 100 | 95 | 100 | N/A | 100 | SRM test failures - User timeout |
31/12/16 | 100 | 100 | 100 | 97 | 100 | N/A | 100 | SRM test failures - User timeout |
01/01/17 | 100 | 100 | 100 | 99 | 100 | N/A | 100 | SRM test failures - User timeout |
02/01/17 | 100 | 100 | 100 | 96 | 100 | N/A | 100 | SRM test failures - User timeout |
03/01/17 | 100 | 100 | 100 | 94 | 100 | N/A | 100 | SRM test failures - User timeout |
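The daily CMS figures above, dominated by the SRM "User timeout" failures, can be summarised as a fortnight average (an illustrative calculation, not an official availability metric):

```python
# CMS daily availability (%), 21/12/16 to 03/01/17, from the table above.
cms_daily = [96, 100, 88, 96, 97, 97, 96, 79, 62, 95, 97, 99, 96, 94]

average = sum(cms_daily) / len(cms_daily)
worst = min(cms_daily)

print(f"fortnight average: {average:.1f}%")  # fortnight average: 92.3%
print(f"worst day: {worst}%")                # worst day: 62%
```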
Notes from Meeting.
- It was reported that CMS are seeing poor efficiencies across all Tier1s.
- We had to connect to Vidyo using the phone link. We will try to get the Vidyo client fixed in our meeting room.