RAL Tier1 Operations Report for 2nd July 2018
Review of Issues during the week 25th June to the 2nd July 2018.
- A three-day period (July 24th to July 26th) is currently scheduled during which RAL Tier-1 will be undertaking server room circuit breaker testing. Although it is believed that these tests should not affect our services, given their nature all Tier-1 production services should be considered "At Risk" during this time. VOs are asked to take the possibility of an unexpected outage into account for any high-priority/critical jobs they may wish to run during this period.
- The Tier-1 is currently undertaking a campaign to monitor our smaller VOs more closely and to take a proactive approach to VO usage (and, where required, intervention).
Current operational status and issues
Resolved Castor Disk Server Issues
Ongoing Castor Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
Entries in GOC DB starting since the last report.
- No downtime scheduled in the GOC DB for the next 2 weeks.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Update systems to use SL7 and configure them via Quattor/Aquilon. (Tape servers done.)
- Move to generic Castor headnodes.
- Networking:
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers; a dual-stack check sketch follows this list.)
- Internal:
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure:
- Testing of power distribution boards in the R89 machine room is being scheduled for 24th to 26th July. The effect of this on our services is anticipated to be minimal.
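As a quick way of confirming which of the dual-stack services resolve over both protocols, a name-resolution check such as the minimal Python sketch below can be used. The hostname listed is a placeholder to be replaced with the actual Perfsonar, FTS3, squid or Stratum-1 aliases; this only checks DNS, not that the service actually listens on IPv6, and it is an illustrative aid rather than part of the deployment.

    import socket

    # Placeholder hostname; substitute the real Perfsonar/FTS3/squid/Stratum-1 aliases.
    HOSTS = ["example-service.gridpp.rl.ac.uk"]

    def address_families(host):
        """Return the set of IP families ('IPv4'/'IPv6') a hostname resolves to."""
        families = set()
        for family, _type, _proto, _canon, _addr in socket.getaddrinfo(host, None, type=socket.SOCK_STREAM):
            if family == socket.AF_INET:
                families.add("IPv4")
            elif family == socket.AF_INET6:
                families.add("IPv6")
        return families

    if __name__ == "__main__":
        for host in HOSTS:
            try:
                print(host, "->", ", ".join(sorted(address_families(host))) or "no addresses")
            except socket.gaierror as err:
                print(host, "-> lookup failed:", err)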
Open GGUS Tickets (Snapshot taken during morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
135822 | cms | in progress | very urgent | 26/06/2018 | 29/06/2018 | CMS_Central Workflows | File Read Problems for Production at T1_UK_RAL | WLCG
134685 | dteam | in progress | less urgent | 23/04/2018 | 11/06/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI
124876 | ops | reopened | less urgent | 07/11/2016 | 28/06/2018 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI
GGUS Tickets Closed Last week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
135723 | lhcb | solved | top priority | 19/06/2018 | 28/06/2018 | File Transfer | lcgfts3 FTS server fails all transfers | WLCG
135661 | atlas | closed | less urgent | 14/06/2018 | 28/06/2018 | Databases | RAL-LCG2: ATLAS RAL Frontier server down | WLCG
135455 | cms | solved | less urgent | 31/05/2018 | 25/06/2018 | File Transfer | Checksum verification at RAL | EGI
135367 | snoplus.snolab.ca | closed | less urgent | 28/05/2018 | 27/06/2018 | Other | Lost access to srm-snoplus.gridpp.rl.ac.uk | EGI
135308 | mice | closed | top priority | 24/05/2018 | 27/06/2018 | Information System | Can't send data to RAL Castor | EGI
135293 | ops | solved | less urgent | 23/05/2018 | 28/06/2018 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 | EGI
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments
2018-06-25 | 98 | 98 | 99 | 100 | 100 | 100 |
2018-06-26 | 100 | 100 | 98 | 100 | 100 | 100 |
2018-06-27 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-06-28 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-06-29 | 100 | 100 | 99 | 100 | 100 | 100 |
2018-06-30 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-07-01 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-07-02 | - | - | - | 100 | 100 | 100 |
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)
Day | Atlas HC | CMS HC | Comment
2018/06/19 | 89 | 99 |
2018/06/20 | 100 | 83 |
2018/06/21 | 92 | 97 |
2018/06/22 | 85 | 100 |
2018/06/23 | 98 | 100 |
2018/06/24 | 93 | 100 |
2018/06/25 | - | - |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
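For reference, the colour coding quoted in the two table headings above amounts to the following mapping, restated here as a small Python helper for convenience; the "Green" label for figures that meet the target is our own naming, not taken from the report.

    def availability_band(percent):
        """Map a daily availability figure (%) to the report's colour bands.

        Thresholds as stated in the table headings: target 97.0%,
        Red below 90%, Orange below 97%; anything else meets the target
        (labelled "Green" here for convenience).
        """
        if percent < 90.0:
            return "Red"
        if percent < 97.0:
            return "Orange"
        return "Green"

    # Examples drawn from the Atlas HC column above: 85 is Red, 92 is Orange, 100 is Green.
    assert availability_band(85) == "Red"
    assert availability_band(92) == "Orange"
    assert availability_band(100) == "Green"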
- GGUS ticket this morning regarding xrootd: failing requests were hitting the external gateway, which they should not.
- Lots of transfers show similar failures. All failing requests hitting the xrootd servers are from CMS; none are from Atlas (see the tally sketch below).
- There is not enough resource to handle the requests.
- Two possible causes:
- - 1. If requests talk to the internal gateway they are being redirected to the external one, though this is not confirmed.
- - 2. The mapping within the node/Docker container is failing, i.e. a faulty configuration somewhere, but TB doesn't think it is our fault, as Atlas is running OK with the same configuration.
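The per-VO breakdown of the failing requests above would typically be established by tallying the gateway logs. Below is a minimal sketch of such a tally in Python; the log file name, line layout and field names (vo=, gateway=, status=) are illustrative assumptions, not the actual Echo gateway log format.

    import re
    from collections import Counter

    # Assumed, illustrative log line layout (NOT the real gateway format), e.g.:
    #   2018-06-29T10:15:02 vo=cms gateway=external-gw01 status=FAILED path=/store/...
    LINE_RE = re.compile(r"vo=(?P<vo>\S+)\s+gateway=(?P<gw>\S+)\s+status=(?P<status>\S+)")

    def tally_failures(log_path):
        """Count failed requests per (VO, gateway) pair from a transfer log."""
        counts = Counter()
        with open(log_path) as handle:
            for line in handle:
                match = LINE_RE.search(line)
                if match and match.group("status") == "FAILED":
                    counts[(match.group("vo"), match.group("gw"))] += 1
        return counts

    if __name__ == "__main__":
        for (vo, gateway), n in tally_failures("gateway.log").most_common():
            print(f"{vo:10s} {gateway:20s} {n}")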