Tier1 Operations Report 2017-04-26
From GridPP Wiki
Latest revision as of 10:30, 26 April 2017
RAL Tier1 Operations Report for 26th April 2017
Review of Issues during the week 19th to 26th April 2017.
- Following the reversion of the LHCb Castor SRMs on Wednesday 12th April, Castor for LHCb has worked much better, although some significant problems have remained. The total load on LHCb Castor has had to be managed by LHCb, and there have been times when the SRMs clogged up, requiring manual intervention to restart them. Nevertheless, the appreciable backlog of LHCb stripping and merging jobs that had built up has been largely eliminated.
- There was a problem with Castor for CMS overnight last night (Tuesday/Wednesday 25/26 April) when the database reported blocking sessions. This fixed itself after a couple of hours.
- There was a problem for Atlas (as shown in the HammerCloud test results below) over the weekend when AtlasScratchDisk filled up. This space is managed by Atlas.
- We are also seeing a non-negligible failure rate for CE SAM tests for CMS that is not yet understood.
Resolved Disk Server Issues
- GDSS722 (AtlasDataDisk - D1T0) was taken out of service on Tuesday 18th April for memory errors to be investigated. Following a swap round and re-seating of its memory it was returned to service the following day (Wednesday 19th April).
- GDSS737 (LHCbDst - D1T0) was taken out of service on Monday 24th April after two disk drives had failed. It returned to service, initially read-only, yesterday (25th April).
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities.
Ongoing Disk Server Issues
- GDSS729 (CMSDisk - D1T0) was taken out of service on 18th April after it became unresponsive. Investigations are ongoing.
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- Atlas Pilot (Analysis) job limit of 1500 removed.
- Changed Castor non-ATLAS 'default' service classes to point to the D0T1 endpoints to better handle transfers that do not specify a service class.
Declared in the GOC DB
None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Update Castor SRMs - CMS & GEN still to do. Problems seen with the SRM update mean these will probably wait until Castor 2.1.16 is rolled out.
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6.
- Update to version 2.1.16
- Networking
- Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
Entries in GOC DB starting since the last report.
- None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
127916 | Green | Alarm | In Progress | 2017-04-25 | 2017-04-25 | LHCb | RAL srm-s are down again for LHCb |
127612 | Green | Alarm | In Progress | 2017-04-08 | 2017-04-10 | LHCb | CEs at RAL not responding |
127598 | Green | Urgent | In Progress | 2017-04-07 | 2017-04-07 | CMS | UK XRootD Redirector |
127597 | Green | Urgent | In Progress | 2017-04-07 | 2017-04-10 | CMS | Check networking and xrootd RAL-CERN performance |
127388 | Green | Less Urgent | In Progress | 2017-03-29 | 2017-04-20 | LHCb | [FATAL] Connection error for some file |
127240 | Green | Urgent | In Progress | 2017-03-21 | 2017-04-05 | CMS | Staging Test at UK_RAL for Run2 |
126905 | Green | Less Urgent | Waiting Reply | 2017-03-02 | 2017-04-21 | solid | finish commissioning cvmfs server for solidexperiment.org |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | | CASTOR at RAL not publishing GLUE 2. |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|---|
19/04/17 | 100 | 100 | 98 | 93 | 100 | 97 | 100 | 99 | Atlas & CMS: SRM test failures. |
20/04/17 | 100 | 100 | 94 | 98 | 84 | 97 | 100 | 100 | Atlas, CMS, & LHCb: SRM test failures. |
21/04/17 | 100 | 100 | 87 | 100 | 92 | 100 | 96 | 100 | Atlas & LHCb: SRM Test failures |
22/04/17 | 100 | 100 | 84 | 96 | 100 | 47 | 98 | 100 | Atlas & CMS: SRM errors |
23/04/17 | 100 | 98 | 90 | 97 | 100 | 0 | 100 | 100 | Alice: Missing data; Atlas - pool full; CMS - SRM errors |
24/04/17 | 100 | 100 | 100 | 95 | 79 | 24 | 100 | 100 | CMS & LHCb: SRM test failures. |
25/04/17 | 100 | 100 | 100 | 85 | 71 | 100 | 99 | 99 | CMS & LHCb: SRM test failures. |
Notes from Meeting.
- 1.5PBytes of Atlas data is now stored in ECHO.