RAL Tier1 Operations Report for 6th August 2018
Review of Issues during the week 30th July to the 6th August 2018.
- Although a relatively quiet week after the fun and games of the previous week's power testing, there were a couple of points of interest. The Production team's paging system was working only intermittently (the cause was eventually traced to a faulty SIM), so there was no guarantee that call-outs were reaching the relevant Duty Admin or On-Call. Periodic checking (at least every 2 hours) of mail and the RT queues ensured that no call-outs were missed for longer than 2 hours, i.e. within our accepted SLA.
- LHCb suffered a two-server outage over the weekend, although the servers are now back in production. CMS continued to have their on-going SAM test issues (at the time of writing these have been resolved). Finally, the Docker xrootd containers have been updated to use the Luminous client.
Current operational status and issues
Resolved Castor Disk Server Issues
Ongoing Castor Disk Server Issues
- gdss747 - Atlas - d1t0 - atlasStripInput : Currently in intervention.
- gdss737 - LHCb - d1t0 - lhcbDst : Currently in intervention.
- gdss771 - LHCb - d1t0 - lhcbDst : Currently in intervention.
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
Entries in GOC DB starting since the last report.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Update systems to use SL7 and configure them with Quattor/Aquilon. (Tape servers done.)
  - Move to generic Castor headnodes.
- Internal:
  - DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
136563 | cms | in progress | urgent | 06/08/2018 | 06/08/2018 | CMS_Data Transfers | Possibly corrupted files at RAL | WLCG
136366 | mice | in progress | less urgent | 25/07/2018 | 26/07/2018 | Local Batch System | Remove MICE Queue from RAL T1 Batch | EGI
136358 | cms | on hold | urgent | 25/07/2018 | 03/08/2018 | CMS_Facilities | T1_UK_RAL WN-xrootd-access failure | WLCG
136199 | lhcb | in progress | very urgent | 18/07/2018 | 06/08/2018 | File Transfer | Lots of submitted transfers on RAL FTS | WLCG
136028 | cms | in progress | top priority | 10/07/2018 | 06/08/2018 | CMS_AAA WAN Access | Issues reading files at T1_UK_RAL_Disk | WLCG
134685 | dteam | waiting for reply | less urgent | 23/04/2018 | 06/08/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI
124876 | ops | in progress | less urgent | 07/11/2016 | 23/07/2018 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI
GGUS Tickets Closed Last week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
136537 | ops | verified | less urgent | 03/08/2018 | 06/08/2018 | Operations | [Rod Dashboard] Issue detected : org.bdii.GLUE2-Validate@site-bdii.gridpp.rl.ac.uk | EGI
136460 | cms | solved | urgent | 30/07/2018 | 01/08/2018 | CMS_Data Transfers | Transfers failing to RAL_Buffer | WLCG
136408 | cms | solved | urgent | 27/07/2018 | 01/08/2018 | CMS_Data Transfers | missing files at RAL | WLCG
136229 | cms | closed | very urgent | 19/07/2018 | 02/08/2018 | Data Management - generic | RAL FTS Service not reachable via IPv6 | EGI
136138 | t2k.org | verified | urgent | 16/07/2018 | 02/08/2018 | File Access | Extremely long download times for T2K files on tape at RALL - Part 2 | EGI
136110 | atlas | closed | urgent | 13/07/2018 | 31/07/2018 | File Transfer | RAL-LCG2: Transfer errors as source with "SRM_FILE_UNAVAILABLE" | WLCG
136097 | other | closed | urgent | 13/07/2018 | 03/08/2018 | Operations | Please restart frontier-squid on RAL cvmfs stratum 1 | EGI
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)

Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments
2018-07-30 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-07-31 | 100 | 100 | 99 | 100 | 100 | 100 |
2018-08-01 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-08-02 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-08-03 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-08-04 | 100 | 100 | 98 | 100 | 100 | 100 |
2018-08-05 | 100 | 100 | 99 | 100 | 100 | 100 |
2018-08-06 | 100 | 100 | 99 | 100 | 100 | 100 |
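A minimal sketch (not part of the report's tooling; function and variable names are hypothetical) of how a daily availability figure maps onto the colour bands quoted above, assuming Red means <90%, Orange means below the 97.0% target, and anything at or above target counts as green:

```python
# Hypothetical helper: classify a daily availability percentage against the
# 97.0% target using the Red (<90%) / Orange (<97%) bands quoted above.
def availability_band(percent: float, target: float = 97.0) -> str:
    if percent < 90.0:
        return "red"      # well below target
    if percent < target:
        return "orange"   # below the 97.0% target
    return "green"        # meets or exceeds the target

if __name__ == "__main__":
    # Example using two CMS figures from the table above.
    for day, value in [("2018-08-04", 98), ("2018-07-31", 99)]:
        print(day, value, availability_band(value))
```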
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)

Day | Atlas HC | CMS HC | Comment
2018-07-31 | 91 | 96 |
2018-08-01 | 100 | 98 |
2018-08-02 | 100 | 99 |
2018-08-03 | 100 | 99 |
2018-08-04 | 98 | 99 |
2018-08-05 | 100 | 98 |
2018-08-06 | 100 | 98 |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud