RAL Tier1 Operations Report for 26th August 2015
Review of Issues during the week 19th to 26th August 2015.
- On the morning of Thursday 13th August a problem, seen as failing tests, was triggered by the rollout of an invalid UK CA CRL. The problem was identified and resolved during the morning.
- It is not yet understood why the invalid CRL caused these failures; old versions of fetch-crl are one possible cause.
- A full report on the CRL issuance incident was given to dteam on 2015-08-18.
- There has been a significant backlog of migrations to tape for Atlas as that instance is seeing very high load. Various changes have been made to improve throughput, although so far these have not made a significant difference.
- A problem flagged by LHCb last week was triggered by updates to the information published to the information system by the ARC CEs. This was fixed on Thursday (13th).
- There had been a problem 'un-banning' a particular LHCb user. An upgrade was applied to the Argus server and its caches were flushed to resolve this.
- We received a ticket from Atlas about a bad file that had been recalled from tape. Investigation showed that the file was larger than expected. We are working out how to profit from these "bonus bytes".
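On the CRL incident above: the properties that fetch-crl and the middleware ultimately depend on, namely that a CRL parses and that its nextUpdate is still in the future, can be checked by hand with openssl. A minimal sketch using a throwaway CA; all file names and the CA itself are placeholders for illustration, not the real UK eScience CA files:

```shell
# Create a throwaway CA key/certificate (placeholder names, not the UK CA).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
  -out demo-ca.crt -subj "/CN=Demo CA" -days 2 2>/dev/null

# Minimal openssl-ca config: an empty certificate database is enough
# to issue an (empty) CRL.
touch demo-index.txt
cat > demo-ca.cnf <<'EOF'
[ ca ]
default_ca = demo
[ demo ]
database   = demo-index.txt
default_md = sha256
EOF
openssl ca -config demo-ca.cnf -gencrl -keyfile demo-ca.key \
  -cert demo-ca.crt -crldays 1 -out demo-ca.crl 2>/dev/null

# The checks that matter: does the CRL parse at all, and is its
# nextUpdate still in the future? A CRL failing either test is
# treated as invalid by clients.
openssl crl -in demo-ca.crl -noout -lastupdate -nextupdate
```

This prints the lastUpdate/nextUpdate timestamps of the freshly issued CRL; comparing nextUpdate against the current time is essentially the staleness test that fetch-crl automates.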
Resolved Disk Server Issues
- GDSS720 (AtlasDataDisk - D1T0) crashed on Tuesday 11th. The machine was put back in draining mode on the 12th; the drain is still ongoing ahead of further investigations.
- GDSS707 (AtlasDataDisk - D1T0) crashed on the morning of Monday 17th Aug. The server has been put back in service read-only. It will be drained for further intervention once the drain of GDSS720 is complete.
Current operational status and issues
- The post mortem review of the network incident on the 8th April has been finalised. It can be seen here:
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
- There are some ongoing issues for CMS: a problem with Xroot (AAA) redirection accessing Castor; slow file-open times using Xroot; and poor batch-job efficiencies. The change to the Linux I/O scheduler on CMS disk servers referred to last week has improved data access rates for the worst cases of batch work (pile-up jobs).
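On the I/O scheduler change above: on SL6-era kernels the scheduler is a per-device runtime setting exposed under sysfs. A hedged sketch of inspecting and switching it; device names are placeholders and the write itself needs root:

```shell
# List the I/O scheduler for each visible block device; the active
# scheduler is the one shown in square brackets, e.g. "noop [deadline] cfq".
for f in /sys/block/*/queue/scheduler; do
  [ -e "$f" ] || continue   # skip if no block devices are visible (containers)
  printf '%s: %s\n' "${f%/queue/scheduler}" "$(cat "$f")"
done

# To switch one device (sdb here is a placeholder) at runtime, as root:
#   echo deadline > /sys/block/sdb/queue/scheduler
# The change does not survive a reboot; to persist it, set it on the
# kernel command line (elevator=deadline) or via a udev rule.
```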
Ongoing Disk Server Issues
Notable Changes made since the last meeting.
- On Tuesday 18th Aug the primary Tier1 router was replaced. As part of previous investigations we had been using a borrowed unit. Our own router has now been put back in operation.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Upgrade of Castor disk servers to SL6. We plan to do this for the D1T0 Service Classes on the 26/27 August with an extended 'At Risk'.
- Upgrade of the Oracle databases behind Castor to version 11.2.0.4. This is a multi-step intervention. Exact dates are still to be confirmed and are likely to be revised, but we are looking at the following days:
- Tuesday 15th September: day's Outage for Atlas & GEN.
- Tuesday 22nd September: at risk on Atlas & GEN.
- Tuesday 6th October: day's Outage for ALL instances.
- Thursday 8th October: day's at risk for ALL instances.
- Tuesday 13th October: Half day outage for ALL instances.
- Some detailed internal network reconfigurations to be tackled now that the routers are stable. Notably:
- Brief (less than 20 seconds) break in internal connectivity while systems in the UPS room are re-connected.
- Replacement of cables and connectivity to the UKLIGHT router that provides our link to both the OPN Link to CERN and the bypass route for other data transfers.
- Extending the rollout of the new worker node configuration.
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor and Atlas Frontier (the LFC is already done).
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtime (see above).
- Update disk servers to SL6.
- Update to Castor version 2.1.15.
- Networking:
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole site | UNSCHEDULED | WARNING | 18/08/2015 08:30 | 18/08/2015 10:00 | 1 hour and 30 minutes | Warning during housekeeping activities on network router. No break in connectivity expected.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
115573 | Green | Urgent | In Progress | 2015-08-07 | 2015-08-14 | CMS | T1_UK_RAL Consistency Check (August 2015)
115434 | Green | Less Urgent | Waiting for Reply | 2015-08-03 | 2015-08-07 | SNO+ | glite-wms-job-status warning
115387 | Green | Less Urgent | In Progress | 2015-08-03 | 2015-08-05 | SNO+ | XRootD for SNO+ from RAL
115290 | Green | Less Urgent | On Hold | 2015-07-28 | 2015-07-29 | | FTS3@RAL: missing proper host names in subjectAltName of FTS agent nodes
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-08-17 | CMS | AAA access test failing at T1_UK_RAL
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
19/08/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
20/08/15 | 100 | 100 | 100 | 100 | 100 | 97 | 100 |
21/08/15 | 100 | 100 | 100 | 100 | 100 | 86 | 96 |
22/08/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 |
23/08/15 | 100 | 100 | 100 | 100 | 100 | 94 | n/a |
24/08/15 | 100 | 100 | 100 | 100 | 100 | 94 | 100 |
25/08/15 | 100 | 95.0 | 98.0 | 100 | 100 | 97 | 98 | Alice: Single ARC CE test failure; Atlas: Single SRM test failure.