RAL Tier1 Operations Report for 20th September 2017
Review of Issues during the fortnight 6th to 20th September 2017.
- A better understanding of the CEPH problems affecting Echo has brought stability. However, the disks in servers recently added to Echo have needed careful management, which has slowed the rate at which the new hardware is being brought into full use.
- We have seen problems with the LHCb SRM SAM tests since the 8th September. However, LHCb are not seeing operational problems.
- The CMSDisk area in Castor became full over the weekend of 9/10 Sep, causing SAM test failures.
- There have been three cases where the network link to one of the batches of worker nodes (Dell '16) has dropped out and needed resetting. Switch firmware updates were applied yesterday (19th Sep) to try to fix this.
- There was a problem with the LFC service: a new node was added to the alias but was not configured to support the dteam VO. This was corrected in response to a GGUS ticket.
Resolved Disk Server Issues
- GDSS762 (AtlasDataDisk - D1T0) failed on the evening of Wednesday 6th Sep. It was returned to service on Friday (8th), although no cause for the failure was identified.
- GDSS732 (LHCbDst - D1T0) failed on Sunday 10th Sep. It would not boot and the OS was re-installed. It was returned to service on Tuesday 12th Sep. Two files were lost.
- GDSS769 (AtlasDataDisk - D1T0) also failed on Sunday 10th Sep. It was returned to service the following day (11th), although no cause for the failure was identified.
- GDSS776 (LHCbDst - D1T0) failed on Monday 11th Sep. It was returned to service the following day (12th). One faulty disk drive was replaced.
- GDSS748 (AtlasDataDisk - D1T0) failed on Sunday 17th Sep. It had reported read-only file systems and the OS needed to be reinstalled; it was returned to service the following afternoon (18th). Four files in flight at the time of the failure were declared lost to Atlas.
Current operational status and issues
- The high rate of SRM SAM test failures for CMS stopped in mid-August, although the reason for the improvement is not understood. We will no longer track this issue here.
- There is a problem on the site firewall that is affecting some specific data flows, including our data that passes through the firewall (such as to/from the worker nodes). Discussions with the vendor are ongoing.
Ongoing Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- Re-deployed six disk servers from 2012 batches into lhcbRawRdst (D0-T1).
- Enabled access to Castor for the SOLID experiment.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface. The SOAP interface was disabled on Monday (17th July).
- Re-distribute the data in Echo onto the 2015 capacity hardware. (Ongoing)
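The redistribution onto the new capacity hardware in a Ceph cluster is commonly done by raising the CRUSH weights of the new OSDs in small steps so that backfill stays manageable. The following is a minimal illustrative sketch of that kind of staged reweighting; the OSD ids, target weights, step size and pause interval are hypothetical placeholders, and this is not necessarily the exact procedure used on Echo.

# Illustrative sketch only: stage data onto newly added OSDs by raising their
# CRUSH weights in small steps and waiting for backfill to settle in between.
# The OSD ids, target weights, step size and pause are hypothetical values.
import subprocess
import time

new_osds = {"osd.120": 5.5, "osd.121": 5.5}   # target CRUSH weights (hypothetical)
step = 0.5                                    # weight increase per round
pause = 3600                                  # seconds between rounds

current = {osd: 0.0 for osd in new_osds}
while any(current[osd] < target for osd, target in new_osds.items()):
    for osd, target in new_osds.items():
        if current[osd] < target:
            current[osd] = min(current[osd] + step, target)
            # 'ceph osd crush reweight <name> <weight>' moves data towards the OSD
            subprocess.check_call(
                ["ceph", "osd", "crush", "reweight", osd, str(current[osd])])
    time.sleep(pause)  # allow recovery/backfill to finish before the next step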
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Echo:
- Re-distribute the data in Echo onto the remaining 2015 capacity hardware.
- Networking:
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, all squids and the CVMFS Stratum-1 servers).
- Services:
- The production FTS will be updated now that the requirement to support the deprecated SOAP interface has been removed.
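Once the SOAP interface is gone, transfer submissions need to use the REST interface. Below is a minimal sketch using the FTS3 REST Python ("easy") bindings; the endpoint and file URLs are placeholders rather than the RAL production values.

# Minimal sketch of an FTS3 job submission through the REST interface
# (fts3-rest Python 'easy' bindings), replacing the deprecated SOAP clients.
# The endpoint and file URLs are placeholders, not production values.
import fts3.rest.client.easy as fts3

endpoint = "https://fts3.example.org:8446"     # placeholder FTS3 REST endpoint
context = fts3.Context(endpoint)               # picks up the user's grid proxy

transfer = fts3.new_transfer(
    "gsiftp://source.example.org/data/file1",         # placeholder source
    "gsiftp://destination.example.org/data/file1")    # placeholder destination
job = fts3.new_job([transfer], verify_checksum=True)

job_id = fts3.submit(context, job)
print("Submitted FTS3 job %s" % job_id)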
Entries in GOC DB starting since the last report.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
130573 | Green | Urgent | In Progress | 2017-09-14 | 2017-09-18 | CMS | Possible missing files at T1_UK_RAL_Disk
130537 | Green | Less Urgent | In Progress | 2017-09-13 | 2017-09-13 | OPS | [Rod Dashboard] Issue detected : org.sam.SRM-GetSURLs-ops@srm-solid.gridpp.rl.ac.uk
130467 | Green | Urgent | Waiting for Reply | 2017-09-10 | 2017-09-10 | CMS | T1_UK_RAL has SAM3 CE and SRM critical > 2hours
130207 | Green | Urgent | In Progress | 2017-08-24 | 2017-09-06 | MICE | Timeouts when copying MICE reco data to CASTOR
130193 | Green | Urgent | In Progress | 2017-08-23 | 2017-09-04 | CMS | Staging from RAL tape systems
128991 | Green | Less Urgent | Waiting for Reply | 2017-06-16 | 2017-09-18 | Solid | solidexperiment.org CASTOR tape support
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-07-06 | | CASTOR at RAL not publishing GLUE 2.
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
06/09/17 | 100 | 100 | 100 | 100 | 100 | 100 |
07/09/17 | 100 | 100 | 100 | 100 | 100 | 100 |
08/09/17 | 100 | 100 | 100 | 97 | 88 | 100 | CMS: Disk full; LHCb: [SRM_INVALID_PATH] No such file or directory
09/09/17 | 100 | 100 | 100 | 42 | 88 | 100 | CMS: Disk full; LHCb: [SRM_INVALID_PATH] No such file or directory
10/09/17 | 100 | 100 | 100 | 56 | 75 | 100 | CMS: Disk full; LHCb: [SRM_INVALID_PATH] No such file or directory
11/09/17 | 100 | 100 | 98 | 100 | 83 | 100 | Atlas: Single test failure: could not open connection to srm-atlas.gridpp.rl.ac.uk:8443; LHCb: [SRM_INVALID_PATH] No such file or directory
12/09/17 | 100 | 100 | 98 | 100 | 84 | 100 | Atlas: Single test failure: error 500 Command failed. : an unknown error occurred; LHCb: [SRM_INVALID_PATH] No such file or directory
13/09/17 | 100 | 100 | 100 | 100 | 76 | 100 | [SRM_INVALID_PATH] No such file or directory
14/09/17 | 100 | 100 | 100 | 100 | 91 | 100 | [SRM_INVALID_PATH] No such file or directory
15/09/17 | 100 | 100 | 100 | 100 | 72 | 100 | [SRM_INVALID_PATH] No such file or directory
16/09/17 | 100 | 100 | 100 | 100 | 83 | 100 | [SRM_INVALID_PATH] No such file or directory
17/09/17 | 100 | 100 | 100 | 100 | 88 | 100 | [SRM_INVALID_PATH] No such file or directory
18/09/17 | 100 | 100 | 100 | 100 | 67 | 100 | [SRM_INVALID_PATH] No such file or directory
19/09/17 | 100 | 100 | 100 | 100 | 84 | 100 | [SRM_INVALID_PATH] No such file or directory
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
06/09/17 | 100 | 100 | 99 |
07/09/17 | 100 | 100 | 100 |
08/09/17 | 99 | 100 | 100 |
09/09/17 | 100 | 100 | 100 |
10/09/17 | 99 | 100 | 100 |
11/09/17 | 100 | 100 | 100 |
12/09/17 | 100 | 100 | 100 |
13/09/17 | 96 | 96 | 100 |
14/09/17 | 88 | 100 | 99 |
15/09/17 | 100 | 100 | 99 |
16/09/17 | 100 | 100 | 100 |
17/09/17 | 98 | 100 | 100 |
18/09/17 | 85 | 89 | 100 |
19/09/17 | 97 | 100 | 100 |
- The Dell '16 batch of storage for Echo (CEPH) has almost completed its testing with the Fabric Team. It will then be handed over to the Echo team for installation into the development CEPH cluster for further testing.
- There was a discussion about the testing of disk servers that have stood idle before being installed into Echo (Ceph). The servers currently giving problems were put into a Ceph test area and exercised; however, this was clearly insufficient to shake out enough of the disk problems, and the process will be revisited.
- Discussions are underway with ALICE about using Echo.
- We will work with CMS to find the 'dark data' that is in Castor but not known to CMS. This would fix the problems of CMSDisk filling.
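As a starting point, one simple way to locate the dark data is to diff a dump of the Castor namespace for the CMSDisk area against a dump of the files CMS knows about. The sketch below illustrates that comparison; the dump file names and the one-path-per-line format are assumptions for illustration only.

# Illustrative sketch: report 'dark data' present in Castor but unknown to CMS
# by comparing two dumps. File names and formats below are assumed, not real.
def load_paths(filename):
    """Return the set of logical file paths listed one per line in a dump."""
    with open(filename) as handle:
        return {line.strip() for line in handle if line.strip()}

castor_files = load_paths("castor_cmsdisk_namespace_dump.txt")  # assumed Castor namespace dump
cms_known_files = load_paths("cms_catalogue_dump.txt")          # assumed CMS catalogue dump

dark_data = sorted(castor_files - cms_known_files)
print("Files in Castor but unknown to CMS: %d" % len(dark_data))
for path in dark_data:
    print(path)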