RAL Tier1 Operations Report for 9th July 2018
Review of Issues during the week 2nd July to the 9th July 2018.
Current operational status and issues
- DS upgraded the RAL site connection to JANET to 100Gb/s. The link between the new routers and the older border routers was upgraded from 40Gb/s to 80Gb/s to take advantage of the new site connection speed. However, during this process they appear to have broken IPv6: early Wednesday morning we had no IPv6 connectivity out of RAL, either via the RAL core or the 'bypass' route. A sketch of a minimal IPv6 reachability check is given below.
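As a minimal sketch of the kind of check that can confirm such an outage (the target hostnames and port below are illustrative assumptions, not the actual endpoints tested during the incident):

```python
#!/usr/bin/env python3
"""Minimal IPv6 reachability probe: attempt a TCP connection over IPv6 only."""

import socket

# Hosts to test; both the names and the port are illustrative assumptions.
TARGETS = [
    ("www.janet.ac.uk", 443),
    ("lcgft-atlas.gridpp.rl.ac.uk", 443),
]

def ipv6_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection succeeds over IPv6 only."""
    try:
        # AF_INET6 forces IPv6; failure here mirrors the loss of v6 routing/DNS.
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record resolvable
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue  # try the next resolved address
    return False

if __name__ == "__main__":
    for host, port in TARGETS:
        status = "OK" if ipv6_reachable(host, port) else "FAIL"
        print(f"IPv6 {host}:{port} -> {status}")
```

Running this from a host inside the RAL core and from one on the 'bypass' route would distinguish a site-wide v6 outage from a single broken path.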
Resolved Castor Disk Server Issues
Ongoing Castor Disk Server Issues
- gdss747 (Atlas, D1T0): drive in intervention.
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- Worker nodes and HTCondor: we are now running new Docker containers that in turn run Singularity. Further changes include an update to use UMD4. This new container is currently being tested and is running on arc-ce04.
- The new Docker container images (stfc/grid-workernode-c6:2018-07-09.2 and stfc/grid-workernode-c7:2018-07-09.2) have been rolled out across the entire batch farm; a rollout sanity-check sketch follows this list.
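As a hedged sketch of how such a rollout could be spot-checked on an individual worker node (only the image tags are taken from this report; querying the local Docker daemon this way is an assumption about the node setup):

```python
#!/usr/bin/env python3
"""Rollout sanity check: confirm a worker node has the expected
grid-workernode image tags available locally."""

import subprocess

# Image tags named in the report; treated as the source of truth for this check.
EXPECTED = {
    "stfc/grid-workernode-c6:2018-07-09.2",
    "stfc/grid-workernode-c7:2018-07-09.2",
}

def local_images() -> set:
    """List repository:tag pairs known to the local Docker daemon."""
    out = subprocess.run(
        ["docker", "images", "--format", "{{.Repository}}:{{.Tag}}"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())

if __name__ == "__main__":
    missing = EXPECTED - local_images()
    if missing:
        print("Missing expected images:", ", ".join(sorted(missing)))
    else:
        print("All expected worker-node images present.")
```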
Entries in GOC DB starting since the last report.
- No downtime is scheduled in the GOC DB for the next two weeks.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Update systems to use SL7, configured by Quattor/Aquilon (tape servers already done).
- Move to generic Castor headnodes.
- Internal:
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure:
- There is a scheduled three-day period (24th - 26th July) during which RAL Tier-1 will be undertaking server room circuit-breaker testing. Although these tests are not expected to affect our services, given their nature all Tier-1 production services should be considered "At Risk" during this time. All VOs are asked to consider the possibility of an unexpected outage when planning any high-priority/critical jobs during this period.
Open GGUS Tickets (snapshot taken during the morning of the meeting):

| Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
| 136028 | cms | in progress | urgent | 10/07/2018 | 11/07/2018 | CMS_AAA WAN Access | Issues reading files at T1_UK_RAL_Disk | WLCG |
| 134685 | dteam | in progress | less urgent | 23/04/2018 | 09/07/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI |
| 124876 | ops | reopened | less urgent | 07/11/2016 | 28/06/2018 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI |
GGUS Tickets Closed Last Week:

| Request id | Affected VO | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
| 136002 | cms | solved | urgent | 09/07/2018 | 09/07/2018 | CMS_Facilities | T1_UK_RAL SE Xrootd read failure | WLCG |
| 135940 | cms | solved | urgent | 04/07/2018 | 06/07/2018 | CMS_Data Transfers | Transfers failing to RAL_Disk - no data available | WLCG |
| 135901 | ops | verified | less urgent | 03/07/2018 | 06/07/2018 | Operations | [Rod Dashboard] Issue detected : org.bdii.GLUE2-Validate@site-bdii.gridpp.rl.ac.uk | EGI |
| 135822 | cms | solved | very urgent | 26/06/2018 | 05/07/2018 | CMS_Central Workflows | File Read Problems for Production at T1_UK_RAL | WLCG |
| 135740 | none | closed | urgent | 20/06/2018 | 04/07/2018 | File Access | CVMFS issue on RAL for /cvmfs/dune.opensciencegrid.org/ | EGI |
| 135711 | cms | closed | urgent | 18/06/2018 | 06/07/2018 | CMS_Central Workflows | T1_UK_RAL production jobs failing | WLCG |
| 135455 | cms | closed | less urgent | 31/05/2018 | 09/07/2018 | File Transfer | Checksum verification at RAL | EGI |
| 135342 | ops | verified | less urgent | 27/05/2018 | 02/07/2018 | Operations | [Rod Dashboard] Issue detected : egi.eu.lowAvailability-/RAL-LCG2@RAL-LCG2_Availability | EGI |
| 135293 | ops | verified | less urgent | 23/05/2018 | 02/07/2018 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 | EGI |
Target Availability for each site is 97.0% (Red <90%; Orange <97%).

| Day | Atlas | Atlas-Echo | CMS | LHCb | Alice | OPS | Comments |
| 2018-07-03 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 2018-07-04 | 100 | 100 | 87 | 100 | 100 | 100 | |
| 2018-07-05 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 2018-07-06 | 100 | 100 | 99 | 100 | 100 | 100 | |
| 2018-07-07 | 100 | 100 | 83 | 100 | 100 | 100 | |
| 2018-07-08 | 100 | 100 | 89 | 100 | 100 | 96.5 | |
| 2018-07-09 | 100 | 100 | 100 | 100 | 100 | 100 | |
Target Availability for each site is 97.0% (Red <90%; Orange <97%).

| Day | Atlas HC | CMS HC | Comment |
| 2018/07/03 | 100 | 98 | |
| 2018/07/04 | 98 | 100 | |
| 2018/07/05 | 98 | 100 | |
| 2018/07/06 | 100 | 100 | |
| 2018/07/07 | 93 | 99 | |
| 2018/07/08 | 94 | 99 | |
| 2018/07/09 | 100 | 100 | |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
- GGUS ticket this morning ref. xrootd: failing requests were hitting the external gateway, which they shouldn't (see the read-test sketch below).
- Lots of transfers show similar failures. All failing requests hitting the xrootd servers are from CMS; none are from Atlas.
- There is not enough resource to handle the requests.
- Two possible explanations: 1. Requests that talk to the internal gateway may be being redirected to the external one, though this is not confirmed. 2. The mapping within the node/Docker container is failing, i.e. a faulty config somewhere, but TB doesn't think it is our fault as Atlas is running OK with the same config.
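As a sketch of the kind of two-gateway comparison read test that could support this debugging (both gateway hostnames and the test file path are placeholders, not the real RAL endpoints; only the general xrootd setup is taken from the notes above):

```python
#!/usr/bin/env python3
"""Try reading the same file through the internal and external xrootd
gateways and compare the outcomes."""

import subprocess

# Hypothetical endpoints; substitute the actual internal/external gateways.
GATEWAYS = {
    "internal": "root://xrootd-internal.example.rl.ac.uk",
    "external": "root://xrootd-external.example.rl.ac.uk",
}
TEST_FILE = "/store/test/sample.root"  # placeholder path

def try_read(gateway_url: str, path: str) -> bool:
    """Attempt to copy the file to /dev/null via xrdcp; True on success."""
    result = subprocess.run(
        ["xrdcp", "--force", f"{gateway_url}/{path}", "/dev/null"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for name, url in GATEWAYS.items():
        ok = try_read(url, TEST_FILE)
        print(f"{name} gateway read: {'OK' if ok else 'FAILED'}")
```

If the internal read fails while the external one succeeds (or the internal attempt visibly lands on the external gateway in the server logs), that would point at the unconfirmed redirection in explanation 1 rather than the container mapping in explanation 2.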