RAL Tier1 Operations Report for 21st March 2018
Review of Issues during the week 14th to 21st March 2018
- This week has seen a large number of GGUS tickets raised, all appearing to have xrootd as the common denominator. After extensive investigation the issue was traced to a firewall rule that had recently been lost (and hence packets being dropped); thanks to Rajan and John for providing the proof of this on Tuesday. At the time of writing we believe the rule has been re-instated; a basic port-level connectivity check of the kind sketched below can be used to confirm this.
As this represents a major incident, a postmortem will be undertaken.
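For reference, a minimal connectivity sketch of the kind mentioned above, assuming the default xrootd port (1094) and a hypothetical hostname (not the real RAL endpoint). Run from inside and outside the firewall, a success/failure mismatch is consistent with packets being dropped at the firewall rather than a problem with the xrootd service itself:
<pre>
#!/usr/bin/env python3
"""Minimal TCP reachability probe for an xrootd endpoint (illustrative sketch).

The hostname is hypothetical; substitute the gateway actually being tested.
Success from inside the firewall but failure from outside is consistent
with a firewall rule having been lost.
"""
import socket

HOST = "xrootd-gateway.example.ac.uk"   # hypothetical endpoint
PORT = 1094                             # default xrootd port (assumed)


def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    state = "reachable" if port_open(HOST, PORT) else "NOT reachable"
    print("%s:%d is %s" % (HOST, PORT, state))
</pre>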
Current operational status and issues
- The problem of data flows through the site firewall being restricted is still present. New firewall equipment has been delivered and is scheduled for installation in a few weeks' time.
Resolved Castor Disk Server Issues
Ongoing Castor Disk Server Issues
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor | SCHEDULED | OUTAGE | 27/03/2018 10:00 | 27/03/2018 16:00 | 6 hours | Outage to Castor Storage System while back-end databases are patched. During the intervention both Castor disk and tape will be unavailable.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Replacement (upgrade) of the RAL firewall, scheduled for a few weeks' time.
Listing by category:
- Echo
- Apply a minor CEPH update to fix the "backfill" bug; increase the number of placement groups and add more capacity.
- Castor:
- Update systems to use SL7, configured by Quattor/Aquilon. (Tape servers done.)
- Move to generic Castor headnodes.
- Networking
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers.) A simple dual-stack resolution check is sketched after this list.
- Replacement (upgrade) of RAL firewall.
- Internal
- DNS servers will be rolled out within the Tier1 network.
- Infrastructure
- Testing of power distribution boards in the R89 machine room is being scheduled for late July / early August. The effect of this on our services is being discussed.
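As referenced in the dual-stack item above, a minimal sketch (with a hypothetical service name) for checking whether a name resolves over both IPv4 and IPv6 from a client's point of view:
<pre>
#!/usr/bin/env python3
"""Check which address families a service name resolves to (illustrative sketch).

The hostname is hypothetical; substitute the alias of the service being
moved to dual stack (e.g. a squid or Stratum-1 alias).
"""
import socket

HOST = "service.example.ac.uk"  # hypothetical dual-stack service name


def address_families(host):
    """Return the set of address families ('IPv4'/'IPv6') the name resolves to."""
    families = set()
    for family, _, _, _, _ in socket.getaddrinfo(host, None):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families


if __name__ == "__main__":
    found = address_families(HOST)
    print("%s resolves over: %s" % (HOST, ", ".join(sorted(found)) or "nothing"))
</pre>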
Open GGUS Tickets (Snapshot during morning of meeting)
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
117683 | none | on hold | less urgent | 18/11/2015 | 09/03/2018 | Information System | CASTOR at RAL not publishing GLUE 2 | EGI
124876 | ops | on hold | less urgent | 07/11/2016 | 13/11/2017 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI
127597 | cms | on hold | urgent | 07/04/2017 | 29/01/2018 | File Transfer | Check networking and xrootd RAL-CERN performance | EGI
132589 | lhcb | in progress | very urgent | 21/12/2017 | 12/03/2018 | Local Batch System | Killed pilots at RAL | WLCG
133619 | cms | waiting for reply | top priority | 21/02/2018 | 12/03/2018 | CMS_Central Workflows | T1_UK_RAL Unmerged files missing | WLCG
133717 | cms | in progress | very urgent | 27/02/2018 | 07/03/2018 | CMS_Data Transfers | RAL FTS3 Service: Significant Drop in Transfer Efficiency | WLCG
133764 | snoplus.snolab.ca | in progress | very urgent | 01/03/2018 | 08/03/2018 | Information System | BDII missing SFU information | EGI
133992 | atlas | in progress | less urgent | 12/03/2018 | 14/03/2018 | File Transfer | RAL-LCG2-ECHO: No such file or directory | EGI
GGUS Tickets Closed Last week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope
133719 | atlas | solved | urgent | 27/02/2018 | 14/03/2018 | File Transfer | Transfers to RAL-LCG2-ECHO failing | WLCG
133752 | atlas | solved | very urgent | 01/03/2018 | 14/03/2018 | File Transfer | RAL FTS service appears broken | WLCG
133842 | snoplus.snolab.ca | solved | less urgent | 05/03/2018 | 08/03/2018 | Other | File Stuck on RAL | EGI
133997 | lhcb | verified | urgent | 13/03/2018 | 14/03/2018 | File Access | Bad data was encountered | WLCG
Day | ALICE | ATLAS | ATLAS-ECHO | CMS | LHCb | OPS | Comment
2018-03-07 | 100 | 100 | 100 | 99 | 100 | 100 |
2018-03-08 | 100 | 100 | 100 | 98 | 100 | 100 |
2018-03-09 | 100 | 100 | 100 | 98 | 100 | 100 |
2018-03-10 | 100 | 100 | 100 | 97 | 100 | 100 |
2018-03-11 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-03-12 | 100 | 100 | 100 | 100 | 100 | 100 |
2018-03-13 | 100 | 98 | 100 | 100 | 100 | 100 |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | CMS HC | Comment
2018/03/07 | 100 | 100 |
2018/03/08 | 94 | 100 |
2018/03/09 | 90 | 99 |
2018/03/10 | 69 | 100 |
2018/03/11 | 82 | 99 |
2018/03/12 | 91 | 100 |
2018/03/13 | 94 | 100 |
- LHCb reported that the recent reprocessing campaign went well. There was no recurrence of the problems accessing Castor files that were experienced in previous such campaigns.