RAL Tier1 Operations Report for 1st July 2015
Review of Issues during the week 24th June to 1st July 2015.
- As reported at the last meeting, AtlasDataDisk in Castor became full on the morning of Wednesday 17th June. Four additional disk servers were added on the Wednesday afternoon (17th) and a further four on Friday (19th). Note that one of the initial four servers (GDSS763) failed on the Friday (19th).
- During the last week LHCb reported problems accessing old files in Castor that did not have a stored checksum (these files had been in this state for some years). Checksums have now been retrospectively added to Castor for these cases; a sketch of the style of checksum involved follows this list.
- Last Wednesday (17th), following a problem flagged up by LHCb, it was realised that we were using incorrect VOMS servers for regenerating the Castor gridmap files. This was fixed the same day.
- A problem transferring files from FNAL to us is being investigated.
- On Tuesday 23rd June there were two network issues. The first was a spontaneous reboot of a core network switch, which broke connectivity to the Tier1 for around 8 minutes. The second was a very high rate of traffic around the Tier1 network, lasting around 45 minutes from 16:00. This is not completely understood, but appears to have been caused by the restart of an old hypervisor, which led to more than one copy of a particular virtual machine running.
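As background to the retrospective checksum fix above: Castor's stored file checksums are typically adler32 values. Below is a minimal sketch of how such a checksum can be computed, assuming a standard zlib adler32 over the raw file contents; the script and file path are illustrative only, not the actual tool used on the Castor servers.

```python
import zlib

def adler32_checksum(path, chunk_size=1 << 20):
    """Compute an adler32 checksum incrementally, so that large
    files are never read into memory in one go."""
    checksum = 1  # adler32's defined starting value
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            checksum = zlib.adler32(chunk, checksum)
    # Castor-style checksums are conventionally shown as 8 hex digits
    return format(checksum & 0xFFFFFFFF, "08x")

if __name__ == "__main__":
    print(adler32_checksum("/etc/hostname"))  # illustrative path
```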
Resolved Disk Server Issues
- The draining of GDSS763 (AtlasDataDisk - D1T0) has been completed and the server removed. This server failed on the 19th June. Following further crashes, the disks were placed in a different server/chassis so that all files could be drained off.
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- The post-mortem review of the network incident on the 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- On Wednesday (just after last week's meeting) a network test was carried out. This confirmed that non-Tier1 services on our network now route traffic avoiding our (problematic) Tier1 router pair. The test also confirmed the long-standing problem in the primary Tier1 router. This paves the way for a longer intervention, with the vendor present, to try to get to the bottom of the router problem.
- Old files in Castor that were missing stored checksums have had these added.
- Eight additional disk servers have been added to AtlasDataDisk (approaching a Petabyte of extra capacity).
- The batch job limit for Alice has been completely removed. (It was set at 6000).
- A change to a database procedure was made on the LHCb Castor stager (Tuesday 23rd June) and on the CMS stager the following day (Wednesday 24th). This change significantly speeds up file open times within Castor.
- Files are being successfully transferred from Durham for Dirac.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 01/07/2015 10:00 | 01/07/2015 11:30 | 1 hour and 30 minutes | Warning on site during quarterly UPS/Generator load test. |
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production). |
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Databases:
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor and Atlas Frontier (LFC already done).
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtime (see above).
- Update disk servers to SL6.
- Update to Castor version 2.1.15.
- Networking:
- Resolve problems with primary Tier1 Router. Need to schedule a roughly half-day outage for the vendor to carry out investigations.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
- Make routing changes to allow the removal of the UKLight Router.
- Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 01/07/2015 10:00 | 01/07/2015 11:30 | 1 hour and 30 minutes | Warning on site during quarterly UPS/Generator load test. |
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production). |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
114512 | Green | Less Urgent | In Progress | 2015-06-12 | 2015-06-22 | Atlas | deletion errors for RAL-LCG2 |
113910 | Green | Less Urgent | Waiting Reply | 2015-05-26 | 2015-06-23 | SNO+ | RAL data staging |
113836 | Green | Less Urgent | In Progress | 2015-05-20 | 2015-06-24 | | GLUE 1 vs GLUE 2 mismatch in published queues |
112721 | Red | Less Urgent | In Progress | 2015-03-28 | 2015-06-23 | Atlas | RAL-LCG2: SOURCE Failed to get source file size |
109694 | Red | Urgent | Waiting Reply | 2014-11-03 | 2015-06-23 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-06-16 | CMS | AAA access test failing at T1_UK_RAL |
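Several of the open tickets above concern failing transfers, e.g. 109694, where gfal-copy fails for SNO+ files at RAL. As context, here is a minimal sketch of the equivalent copy via the python gfal2 bindings, which can be used to reproduce such failures; the source and destination URLs are illustrative placeholders, not the actual endpoints from the ticket.

```python
import gfal2

# Illustrative placeholder URLs - not the endpoints from the ticket.
SRC = "srm://srm.example.rl.ac.uk/castor/example/prod/somefile.dat"
DST = "file:///tmp/somefile.dat"

ctx = gfal2.creat_context()          # 'creat' (no trailing 'e') is the API spelling
params = ctx.transfer_parameters()
params.overwrite = True              # replace any existing destination file
params.checksum_check = True         # compare source/destination checksums

try:
    ctx.filecopy(params, SRC, DST)
    print("copy succeeded")
except gfal2.GError as err:
    # Failures surface here with the underlying SRM/GridFTP error text,
    # which is the kind of message reported in tickets like those above.
    print("copy failed:", err.message)
```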
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
24/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
25/06/15 | 100 | 100 | 100 | 100 | 100 | 97 | 99 | |
26/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | n/a | |
28/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
29/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
30/06/15 | 54.9 | 100 | 100 | 100 | 100 | 85 | 100 | |