Revision as of 11:43, 15 November 2017
RAL Tier1 Operations Report for 15th November 2017
Review of Issues during the week 9th to 15th November 2017.
- No significant operational problems to report. However, two successful interventions on Castor:
- Successful patching and rebooting of Castor to pick up latest kernel and errata versions.
- LHCb needed a new version of the SRM software that respects the user-specified bringOnline timeout rather than ignoring it and defaulting to 4 hours. This upgrade of the LHCb SRM component to CASTOR version 2.1.16-18 was successfully carried out this morning.
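The timeout behaviour fixed by the SRM upgrade can be sketched as follows. This is a hypothetical illustration only; `effective_timeout` and its parameters are not part of the actual SRM code:

```python
DEFAULT_TIMEOUT_S = 4 * 3600  # the 4-hour server-side default

def effective_timeout(user_timeout_s=None, respect_user_value=True):
    """Return the bringOnline timeout the server actually applies.

    Hypothetical sketch: older SRM versions behaved as if
    respect_user_value were False, discarding the caller's value.
    """
    if respect_user_value and user_timeout_s is not None:
        return user_timeout_s      # fixed behaviour: honour the request
    return DEFAULT_TIMEOUT_S       # old behaviour: user value ignored

# Old behaviour: a 600 s request still waited up to 4 hours.
print(effective_timeout(600, respect_user_value=False))  # 14400
# Fixed behaviour (2.1.16-18): the user's 600 s is honoured.
print(effective_timeout(600))                            # 600
```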
Current operational status and issues
- There is a problem with the site firewall that is disrupting some specific data flows; discussions with the vendor are ongoing. This affects our data that passes through the firewall (such as transfers to/from the worker nodes).
Resolved Disk Server Issues
Ongoing Disk Server Issues
- GDSS775 (LHCbDst D1T0) crashed at the end of yesterday afternoon (14th). Investigations are ongoing.
Limits on concurrent batch system jobs.
Notable Changes made since the last meeting.
- Re-distribution of data in Echo onto the 2015 capacity hardware is now complete. There are now 8 PBytes of usable space in Echo; however, some data rebalancing remains to be done before the quotas are raised so the VOs can make use of the extra space.
- All Tier1 Castor tape servers have been upgraded to SL7.
- The "Production" FTS service was enabled for IPv6 (i.e. dual stack enabled) yesterday (14th Nov). (The Test instance had been done last week.)
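A dual-stack service should resolve to both IPv4 and IPv6 addresses. A minimal check can be sketched as below; the hostname and port used in the comment are illustrative assumptions, not taken from the report:

```python
import socket

def classify(addrinfos):
    """Given results in socket.getaddrinfo() format, report which
    IP address families a service name resolves to."""
    families = {entry[0] for entry in addrinfos}
    return {"ipv4": socket.AF_INET in families,
            "ipv6": socket.AF_INET6 in families}

def stack_support(host, port=443):
    # Resolve the host and classify the returned address families.
    # e.g. stack_support("fts3.example.org") on a dual-stack service
    # should return {"ipv4": True, "ipv6": True}.
    return classify(socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP))
```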
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
srm-lhcb.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 15/11/2017 10:00 | 15/11/2017 10:40 | 40 minutes | LHCb CASTOR SRM Update to 2.1.16-18
All Castor | SCHEDULED | OUTAGE | 14/11/2017 09:30 | 14/11/2017 13:00 | 3 hours and 30 minutes | Security patching of CASTOR nodes
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Ongoing or Pending - but not yet formally announced:
Listing by category:
- Castor:
- Update systems (initially tape servers) to use SL7, configured by Quattor/Aquilon.
- Move to generic Castor headnodes.
- Echo:
- Update to next CEPH version ("Luminous").
- Networking
- Extend the number of services on the production network with IPv6 dual stack. (Done for Perfsonar, FTS3, all squids and the CVMFS Stratum-1 servers).
- Services
- The Production and "Test" (Atlas) FTS3 services will be merged and will make use of a resilient distributed database.
- Internal
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
131815 | Green | Urgent | Assigned | 2017-11-14 | 2017-11-14 | Other | solidexperiment.org CASTOR tape copy fails
131815 | Green | Less Urgent | Assigned | 2017-11-13 | 2017-11-13 | T2K.Org | Extremely long download times for T2K files on tape at RAL
130207 | Red | Urgent | On Hold | 2017-08-24 | 2017-11-13 | MICE | Timeouts when copying MICE reco data to CASTOR
127597 | Red | Urgent | Assigned | 2017-04-07 | 2017-10-05 | CMS | Check networking and xrootd RAL-CERN performance
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-11-13 | Ops | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk
117683 | Red | Less Urgent | Waiting for Reply | 2015-11-18 | 2017-11-06 | | CASTOR at RAL not publishing GLUE 2
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Comment
08/11/17 | 100 | 100 | 98 | 46 | 100 | 100 |
09/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
10/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
11/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
12/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
13/11/17 | 100 | 100 | 100 | 100 | 100 | 100 |
14/11/17 | 100 | 100 | 85 | 83 | 100 | 100 | CASTOR patch update.
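The weekly averages implied by the daily availability figures can be computed directly; the lists below carry over the Atlas and CMS columns for the 8th to 14th November:

```python
# Daily availability (%) for Atlas and CMS, 8th-14th November.
atlas = [98, 100, 100, 100, 100, 100, 85]
cms   = [46, 100, 100, 100, 100, 100, 83]

def weekly_mean(values):
    """Arithmetic mean of the daily figures, rounded to one decimal."""
    return round(sum(values) / len(values), 1)

print(weekly_mean(atlas))  # 97.6
print(weekly_mean(cms))    # 89.9
```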
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | Atlas HC | Atlas HC Echo | CMS HC | Comment
08/11/17 | 100 | 100 | 46 |
09/11/17 | 100 | 98 | 100 |
10/11/17 | 100 | 97 | 100 |
11/11/17 | 100 | 100 | 100 |
12/11/17 | 100 | 100 | 100 |
13/11/17 | 100 | 100 | 100 |
14/11/17 | 96 | 100 | 39 |
- Following the merger of AtlasScratchDisk with AtlasDataDisk in Castor some months ago, all data sitting on the older disk servers that made up AtlasScratchDisk has now been moved (or removed). These servers will be decommissioned.