RAL Tier1 Operations Report for 25th June 2018
Review of Issues during the week 18th June to the 25th June 2018.
- 12/6/18 there was an internal network problem, which took around 3 hours to resolve. This seemed mostly to affect Facilities rather than Tier-1 services. The concern is not that a piece of hardware failed, but that traffic does not always switch seamlessly to the backup links (especially for IPv6).
- 19/6/18 09:00 - 13:00 IPv6 was not available. A route switch attempted by Fabric failed; traffic should have fallen back to IPv4 but did not. This issue needs to be raised with Networking/Fabric to try and resolve it once and for all.
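As an illustration of how such a failover gap can be made visible, the minimal sketch below probes a dual-stack endpoint over IPv6 and IPv4 separately, so a broken IPv6 path with a healthy IPv4 path is reported explicitly rather than being masked by automatic fallback. The hostname and port are placeholders, not real RAL endpoints.

<pre>
#!/usr/bin/env python3
# Minimal sketch (hypothetical endpoint): probe a dual-stack host over IPv6 and
# IPv4 separately, so a broken IPv6 path with a working IPv4 path shows up
# explicitly instead of being hidden by automatic fallback.
import socket

HOST = "dualstack-service.example.org"   # placeholder, not a real RAL endpoint
PORT = 443
TIMEOUT = 5  # seconds

def probe(family):
    """Return True if a TCP connection succeeds using only this address family."""
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no A/AAAA record for this family
    for af, socktype, proto, _canon, sockaddr in infos:
        try:
            with socket.socket(af, socktype, proto) as s:
                s.settimeout(TIMEOUT)
                s.connect(sockaddr)
                return True
        except OSError:
            continue
    return False

if __name__ == "__main__":
    v6_ok = probe(socket.AF_INET6)
    v4_ok = probe(socket.AF_INET)
    print("IPv6 reachable: %s, IPv4 reachable: %s" % (v6_ok, v4_ok))
    if v4_ok and not v6_ok:
        print("IPv6 path is broken while IPv4 works - IPv6-only clients would fail.")
</pre>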
Current operational status and issues
Resolved Castor Disk Server Issues
- gdss746 - ATLAS - atlasStripInut - d1t0. Back in full production RW.
- gdss685 - ATLAS - atlasTape - d1t0. Back in full production RW.
- gdss776 - LHCb - LHCb_FAILOVER,LHCb-Disk - d1t0. Currently back in production RO.
Ongoing Castor Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
Entries in GOC DB starting since the last report.
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
  - Move to generic Castor headnodes.
- Networking:
  - Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers; see the dual-stack check sketched after this list.)
- Internal:
  - DNS servers will be rolled out within the Tier1 network.
- Infrastructure:
  - Testing of power distribution boards in the R89 machine room is being scheduled for 24th – 26th July. The effect of this on our services is anticipated to be minimal.
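As a rough sketch of how the dual-stack rollout could be sanity-checked, the script below asks the resolver whether each service name publishes both IPv4 (A) and IPv6 (AAAA) addresses. The hostnames listed are placeholders; the real production service names would be substituted.

<pre>
#!/usr/bin/env python3
# Rough sketch (placeholder hostnames): report whether each service name
# publishes both IPv4 (A) and IPv6 (AAAA) addresses in the DNS.
import socket

SERVICES = ["fts3.example.org", "squid.example.org", "stratum1.example.org"]  # placeholders

def families(host):
    """Return the set of address families the resolver offers for this host."""
    found = set()
    try:
        for family, _type, _proto, _canon, _addr in socket.getaddrinfo(host, None):
            if family == socket.AF_INET:
                found.add("IPv4")
            elif family == socket.AF_INET6:
                found.add("IPv6")
    except socket.gaierror:
        pass  # name does not resolve at all
    return found

if __name__ == "__main__":
    for host in SERVICES:
        fams = families(host)
        if fams == {"IPv4", "IPv6"}:
            status = "dual stack"
        elif fams:
            status = ", ".join(sorted(fams)) + " only"
        else:
            status = "unresolvable"
        print("%s: %s" % (host, status))
</pre>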
Open GGUS Tickets (Snapshot taken during morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
135455 | cms | in progress | less urgent | 31/05/2018 | 04/06/2018 | File Transfer | Checksum verification at RAL | EGI
135293 | ops | on hold | less urgent | 23/05/2018 | 04/06/2018 | Operations | [Rod Dashboard] Issues detected at RAL-LCG2 | EGI
134685 | dteam | in progress | less urgent | 23/04/2018 | 11/06/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI
GGUS Tickets Closed Last week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
135740 | none | solved | urgent | 20/06/2018 | 20/06/2018 | File Access | CVMFS issue on RALfor /cvmfs/dune.opensciencegrid.org/ | EGI
135723 | lhcb | solved | top priority | 19/06/2018 | 19/06/2018 | File Transfer | lcgfts3 FTS server fails all transfers | WLCG
135711 | cms | solved | urgent | 18/06/2018 | 22/06/2018 | CMS_Central Workflows | T1_UK_RAL production jobs failing | WLCG
135516 | cms | closed | urgent | 06/06/2018 | 22/06/2018 | CMS_Facilities | T1_UK_RAL WN-analysis and HammerCloud failures | WLCG
135133 | cms | closed | urgent | 15/05/2018 | 19/06/2018 | CMS_Data Transfers | Likely corrupted File at T1_UK_RAL | WLCG
134703 | cms | closed | urgent | 23/04/2018 | 21/06/2018 | CMS_Data Transfers | Transfer failing from RAL_Disk | WLCG
127597 | cms | closed | urgent | 07/04/2017 | 21/06/2018 | File Transfer | Check networking and xrootd RAL-CERN performance | EGI
Target Availability for each site is 97.0% (Red < 90%, Orange < 97%).
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments
2018-06-18 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-06-19 | 99 | 99 | 99 | 100 | 100 | 100 |
2018-06-20 | 100 | 100 | 96 | 100 | 100 | 100 |
2018-06-21 | 100 | 100 | 93 | 100 | 100 | 100 |
2018-06-22 | 100 | 100 | 85 | 100 | 100 | 100 |
2018-06-23 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-06-24 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-06-25 | 100 | 100 | 100 | 100 | 100 | 100 |
Target Availability for each site is 97.0% (Red < 90%, Orange < 97%).
Day | Atlas HC | CMS HC | Comment
2018/06/19 | 89 | 99 |
2018/06/20 | 100 | 83 |
2018/06/21 | 92 | 97 |
2018/06/22 | 85 | 100 |
2018/06/23 | 98 | 100 |
2018/06/24 | 93 | 100 |
2018/06/25 | - | - |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
- Notes from meeting of 20th June:
- LHCb - A number of drive failures have resulted in 8 files lost: 4 not on the name server and 4 simply lost.
- Issue with files that appear to give a different checksum depending on where they are read from, which is distinctly odd. This was thought to be a RAID issue, but investigations have suggested that is not the case.
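For the checksum oddity above, one quick way to confirm whether two copies of the same file really differ is to recompute the checksum locally on copies fetched via the different routes. The sketch below uses adler32, the checksum type commonly used for grid file transfers; the file paths are placeholders.

<pre>
#!/usr/bin/env python3
# Sketch (placeholder paths): recompute the adler32 checksum of two local copies
# of the same logical file, fetched via different routes, to confirm whether the
# stored content really differs.
import zlib

COPIES = ["/tmp/copy_via_gridftp.dat", "/tmp/copy_via_xrootd.dat"]  # hypothetical copies

def adler32_of(path, chunk_size=1 << 20):
    """Stream the file and return its adler32 checksum as 8 hex digits."""
    value = 1  # adler32 starting value
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            value = zlib.adler32(chunk, value)
    return "%08x" % (value & 0xFFFFFFFF)

if __name__ == "__main__":
    sums = dict((path, adler32_of(path)) for path in COPIES)
    for path, digest in sums.items():
        print("%s  %s" % (digest, path))
    print("MATCH" if len(set(sums.values())) == 1 else "MISMATCH - copies really do differ")
</pre>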