RAL Tier1 Operations Report for 3rd May 2017
Review of Issues during the week 26th April to 3rd May 2017.
- LHCb have completed their stripping and merging campaign for data stored at RAL. Nevertheless we still need to follow up with improvements to Castor to resolve throughput and stability issues. The SRMs required a restart on Friday afternoon (28th April).
- AtlasDataDisk became full. This severely affected the Atlas SAM tests and hence availability. Investigations show problems with Castor's file deletion rate. Once the disk is full, any attempted write is subsequently cleaned up, i.e. it generates a delete request, and these clean-up deletions consume all of Castor's deletion capacity. The result is that once a disk area fills, the problem becomes a severe fault that is only resolved by the VO stopping attempts to write data, allowing some genuine deletions to happen. However, it is also clear that the deletion rate was at times insufficient before the disk became full, and this is not yet understood. A toy illustration of this feedback loop is given after this list.
- There was a problem with Castor for CMS overnight on Tuesday/Wednesday 25/26 April, when the database reported blocking sessions. This resolved itself after a couple of hours.
- There is an ongoing problem with one of the Argus servers. This has affected CMS CE SAM tests (and hence availability) since Tuesday evening (2nd May). Work is ongoing at the time of the meeting.
- We are also seeing a non-negligible failure rate for CE SAM tests for CMS that is not yet understood. CMS are experimenting with running with 'lazy download' disabled.
- There is a problem with the UPS system in the R89 computer building. Internal capacitors overheated last Friday (28th April) and we are currently running on mains bypass, without UPS protection.
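The AtlasDataDisk item above describes a feedback loop: once the pool is full, every failed write generates a clean-up delete request, and those requests saturate the deletion capacity that would otherwise be freeing space. The toy Python model below is a minimal sketch of that arithmetic only; the hourly rates are invented for the example and are not measured Castor figures.

```python
# Toy illustration only: the write/delete feedback loop on a full disk pool.
# Both rates below are assumptions for the example, not measured Castor numbers.

WRITE_ATTEMPTS_PER_HOUR = 20_000   # assumed write attempts arriving while the pool is full
DELETIONS_PER_HOUR = 15_000        # assumed maximum deletion throughput

backlog = 0
for hour in range(1, 13):
    # Every failed write on a full pool is cleaned up with a delete request,
    # so the deletion queue grows by the write rate...
    backlog += WRITE_ATTEMPTS_PER_HOUR
    # ...while only a fixed number of deletions can be processed per hour.
    backlog -= min(backlog, DELETIONS_PER_HOUR)
    print(f"hour {hour:2d}: pending delete requests = {backlog}")

# While writes keep arriving the backlog grows without bound; only when the VO
# stops writing (WRITE_ATTEMPTS_PER_HOUR -> 0) can the queue drain and genuine
# deletions start freeing space again.
```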
Resolved Disk Server Issues
- GDSS722 (AtlasDataDisk - D1T0) was taken out of service on Tuesday 18th April for memory errors to be investigated. Following a swap round and re-seating of its memory it was returned to service the following day (Wednesday 19th April).
- GDSS737 (LHCbDst - D1T0) was taken out of service on Monday 24th April after two disk drives had failed. It returned to service, initially read-only, on 25th April.
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our CMS availability figures; a sketch of the kind of SRM operation these tests exercise is given below.
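As context for these failures, the sketch below shows an SRM metadata lookup of the sort the SAM probes perform, using the gfal2 Python bindings (assuming python-gfal2 is available). This is not the SAM probe itself, and the endpoint URL is a placeholder rather than the real CMS SRM path.

```python
# Minimal sketch of an SRM "stat"-style check via python-gfal2.
# The SURL below is a placeholder, not the production CMS endpoint/path.
import sys
import gfal2

SURL = "srm://srm-cms.example.ac.uk:8443/srm/managerv2?SFN=/castor/example.ac.uk/cms/store/"

ctx = gfal2.creat_context()
# Option group/key mirror gfal2's SRM plugin configuration; fail fast rather than hang.
ctx.set_opt_integer("SRM PLUGIN", "OPERATION_TIMEOUT", 60)

try:
    st = ctx.stat(SURL)   # metadata lookup, similar in spirit to the SAM listing test
    print("stat OK: mode=%o size=%d" % (st.st_mode, st.st_size))
except gfal2.GError as exc:
    print("SRM check failed: %s" % exc)
    sys.exit(1)
```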
Ongoing Disk Server Issues
- GDSS729 (CMSDisk - D1T0) was taken out of service on 18th April after it became unresponsive. Investigations are ongoing.
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- Updated the GridFTP / XRootD plugins on the ECHO Ceph gateways, changing the way checksums are stored; a rough illustration of reading back such object metadata follows.
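For background, object-store gateways typically keep a file's checksum as metadata alongside the object. The python-rados sketch below shows how such metadata can be read back; the pool name, object name and extended-attribute key are placeholders for this example, not the names the ECHO gateways actually use.

```python
# Hedged illustration: read a checksum stored as a RADOS object xattr.
# Pool, object and xattr names are placeholders, not the real ECHO keys.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("example-pool")   # placeholder pool name
    try:
        value = ioctx.get_xattr("example-object", "user.checksum.adler32")  # placeholder xattr key
        print("stored checksum:", value.decode())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```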
Declared in the GOC DB
None
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Update Castor SRMs - CMS & GEN still to do. Problems seen with the SRM update mean these will probably wait until Castor 2.1.16 is rolled out.
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6.
- Update to version 2.1.16
- Networking
- Enable the first services on the production network with IPv6 now that the addressing scheme has been agreed (Perfsonar is already working over IPv6). A basic dual-stack connectivity check is sketched below.
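As a generic readiness check for any service being dual-stacked, the sketch below verifies that a hostname publishes an AAAA record and accepts a TCP connection over IPv6. The hostname and port are placeholders; this is not a GridPP tool.

```python
# Basic IPv6 readiness check: AAAA lookup plus a TCP connect over IPv6.
# HOST and PORT are placeholders for whichever service is enabled first.
import socket

HOST = "service.example.gridpp.uk"   # placeholder hostname
PORT = 443                           # placeholder port

try:
    addrs = socket.getaddrinfo(HOST, PORT, socket.AF_INET6, socket.SOCK_STREAM)
except socket.gaierror as exc:
    raise SystemExit(f"No AAAA record / IPv6 address for {HOST}: {exc}")

for family, socktype, proto, _canonname, sockaddr in addrs:
    with socket.socket(family, socktype, proto) as sock:
        sock.settimeout(10)
        sock.connect(sockaddr)
        print(f"IPv6 connect OK to {sockaddr[0]} port {sockaddr[1]}")
        break
```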
Entries in GOC DB starting since the last report.
- None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
---|---|---|---|---|---|---|---
128007 | Green | | In Progress | 2017-04-20 | 2017-05-02 | CMS | T1_UK_RAL, SAM tests failing
127968 | Green | Less Urgent | In Progress | 2017-0 | 2017-0 | MICE | RAL castor: not able to list directories and copy to
127967 | Green | Less Urgent | On Hold | 2017-04-27 | 2017-04-28 | MICE | Enabling pilot role for mice VO at RAL-LCG2
127929 | Green | | In Progress | 2017-04-26 | 2017-04-27 | CMS | T1_UK_RAL, SAM tests failing
127916 | Green | Alarm | In Progress | 2017-04-25 | 2017-04-25 | LHCb | RAL srm-s are down again for LHCb
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-04-10 | LHCb | CEs at RAL not responding
127598 | Green | Urgent | In Progress | 2017-04-07 | 2017-04-07 | CMS | UK XRootD Redirector
127597 | Green | Urgent | In Progress | 2017-04-07 | 2017-04-10 | CMS | Check networking and xrootd RAL-CERN performance
127388 | Green | Less Urgent | In Progress | 2017-03-29 | 2017-05-02 | LHCb | [FATAL] Connection error for some file
127240 | Yellow | Urgent | In Progress | 2017-03-21 | 2017-04-05 | CMS | Staging Test at UK_RAL for Run2
126905 | Green | Less Urgent | Waiting Reply | 2017-03-02 | 2017-04-21 | solid | finish commissioning cvmfs server for solidexperiment.org
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 | | CASTOR at RAL not publishing GLUE 2.
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 841); CMS HC = CMS HammerCloud. Availability figures are percentages.
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment
---|---|---|---|---|---|---|---|---|---
26/04/17 | 100 | 100 | 90 | 82 | 92 | 22 | 92 | 99 | All SRM test failures: Atlas - unknown errors. CMS - SRM failures; LHCb - SRM test failures. |
27/04/17 | 100 | 100 | 85 | 86 | 92 | 51 | 99 | 100 | All SRM test failures: Atlas - unknown errors. CMS - SRM failures and batch job failures. LHCb - SRM test failures. |
28/04/17 | 100 | 100 | 96 | 81 | 97 | 93 | 100 | N/A | All SRM test failures: Atlas - unknown errors. CMS: SRM test failures. LHCb - SRM test failures. |
29/04/17 | 100 | 100 | 85 | 79 | 99 | 99 | 100 | N/A | All SRM test failures: Atlas - unknown errors. CMS: SRM test failures. LHCb - SRM test failure. |
30/04/17 | 100 | 100 | 6 | 63 | 100 | 97 | 100 | 91 | Atlas - disk full. CMS: SRM test failures. |
01/05/17 | 100 | 100 | 0 | 68 | 96 | 97 | 100 | 100 | Atlas - disk full. CMS: SRM test failures. LHCb - SRM test failures. |
02/05/17 | 100 | 100 | 58 | 74 | 96 | 98 | 98 | 99 | Atlas: SRM test failures as DataDisk full; CMS: CE test failures owing to glexec (argus) problem; LHCb: Single SRM test failure. |
Notes from Meeting.
- None yet