Revision as of 09:45, 17 July 2018
RAL Tier1 Operations Report for 16th July 2018
Review of Issues during the week 9th July to the 16th July 2018.
- Tier-1 has been very much business as usual this week. The only incident of real note was the loss of an Atlas disk server (gdss747, D1T0). The incident involved a failed drive that proved unrecoverable, and a data loss was subsequently declared. Under normal circumstances this would be considered a serious incident; however, Atlas have all but completed their migration from CASTOR to ECHO, so no primary data was lost, and 89% of the lost data comprised log files.
Current operational status and issues
- ADVANCE WARNING: A scheduled three-day period (July 24th - July 26th) is coming up during which RAL Tier-1 will be undertaking server room circuit breaker testing. Although these tests are not expected to affect our services, given their nature all Tier-1 production services should be considered "At Risk" during this time. All VOs have been asked to consider the possibility of unexpected outages for any high-priority/critical jobs they may wish to run during this period.
Resolved Castor Disk Server Issues
Ongoing Castor Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
Entries in GOC DB starting since the last report.
- No downtime is scheduled in the GOC DB for the next 2 weeks.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Update systems to use SL7, configured by Quattor/Aquilon (tape servers done).
  - Move to generic Castor headnodes.
- Internal:
  - DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during morning of the meeting).
{| class="wikitable"
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope
|-
| 136110 || atlas || in progress || urgent || 13/07/2018 || 16/07/2018 || File Transfer || RAL-LCG2: Transfer errors as source with "SRM_FILE_UNAVAILABLE" || WLCG
|-
| 136104 || ops || in progress || less urgent || 13/07/2018 || 13/07/2018 || Operations || [Rod Dashboard] Issues detected at RAL-LCG2 || EGI
|-
| 136097 || other || waiting for reply || urgent || 13/07/2018 || 16/07/2018 || Operations || Please restart frontier-squid on RAL cvmfs stratum 1 || EGI
|-
| 136028 || cms || in progress || urgent || 10/07/2018 || 12/07/2018 || CMS_AAA WAN Access || Issues reading files at T1_UK_RAL_Disk || WLCG
|-
| 134685 || dteam || in progress || less urgent || 23/04/2018 || 09/07/2018 || Middleware || please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 || EGI
|-
| 124876 || ops || reopened || less urgent || 07/11/2016 || 28/06/2018 || Operations || [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk || EGI
|}
GGUS Tickets Closed Last week
{| class="wikitable"
! Request id !! Affected vo !! Status !! Priority !! Date of creation !! Last update !! Type of problem !! Subject !! Scope
|-
| 136045 || lhcb || verified || very urgent || 11/07/2018 || 13/07/2018 || File Transfer || Connection issue from RAL FTS? || WLCG
|-
| 136002 || cms || solved || urgent || 09/07/2018 || 09/07/2018 || CMS_Facilities || T1_UK_RAL SE Xrootd read failure || WLCG
|-
| 135723 || lhcb || closed || top priority || 19/06/2018 || 12/07/2018 || File Transfer || lcgfts3 FTS server fails all transfers || WLCG
|-
| 135455 || cms || closed || less urgent || 31/05/2018 || 09/07/2018 || File Transfer || Checksum verification at RAL || EGI
|}
Target Availability for each site is 97.0% (Red < 90%, Orange < 97%).

{| class="wikitable"
! Day !! Atlas !! Atlas-Echo !! CMS !! LHCB !! Alice !! OPS !! Comments
|-
| 2018-07-09 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-07-10 || 100 || 100 || 97 || 100 || 100 || 100 ||
|-
| 2018-07-11 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-07-12 || 100 || 100 || 98 || 100 || 100 || 100 ||
|-
| 2018-07-13 || 100 || 100 || 99 || 100 || 100 || 100 ||
|-
| 2018-07-14 || 100 || 100 || 100 || 100 || 100 || 100 ||
|-
| 2018-07-15 || 100 || 100 || 100 || 92 || 100 || 100 ||
|}
{| class="wikitable"
! Day !! Atlas HC !! CMS HC !! Comment
|-
| 2018-07-10 || 100 || 99 ||
|-
| 2018-07-11 || 98 || 99 ||
|-
| 2018-07-12 || 97 || 99 ||
|-
| 2018-07-13 || 96 || 99 ||
|-
| 2018-07-14 || 100 || 91 ||
|-
| 2018-07-15 || 100 || 91 ||
|-
| 2018-07-16 || 94 || 99 ||
|}
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
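The colour-coding thresholds used in the availability tables above (target 97.0%, Red below 90%, Orange below 97%) can be sketched as a small helper. This is an illustrative sketch only; the function name and the "OK" label for at-or-above-target figures are assumptions, not part of any site tooling.

```python
def availability_colour(percent: float) -> str:
    """Classify a daily availability figure against the 97.0% target.

    Red    : below 90%
    Orange : 90% or above, but below the 97% target
    OK     : at or above the 97% target (label assumed; the report
             only defines the Red and Orange bands)
    """
    if percent < 90.0:
        return "Red"
    if percent < 97.0:
        return "Orange"
    return "OK"

# Example drawn from the table above: LHCB on 2018-07-15 was 92%.
print(availability_colour(92))   # Orange
print(availability_colour(100))  # OK
```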
- GGUS ticket this morning ref xrootd: failing requests were hitting the external gateway, which they shouldn't.
- Lots of transfers with similar failures. All failing requests hitting the xrootd servers are from CMS; none are from Atlas.
- Not enough resource to handle the requests.
- Two possible explanations:
  1. If they talk to the internal gateway they will be redirected to the external one, though this is not confirmed.
  2. Mapping within the node/Docker container is failing. A failing config somewhere, but TB doesn't think it's our fault, as Atlas is running OK with the same config.