Revision as of 12:12, 24 June 2015
RAL Tier1 Operations Report for 24th June 2015
Review of Issues during the week 17th to 24th June 2015.
- As reported at the last meeting, AtlasDataDisk in Castor became full on the morning of Wednesday 17th June. Four additional disk servers were added on Wednesday afternoon and a further four on Friday. Note that one of the initial set of four servers (GDSS763) failed on the Friday (19th).
- During last week LHCb reported problems accessing old files in Castor which did not have a stored checksum. (These files have been like this for some years.) Stored checksums have been retrospectively added to Castor for these cases; a checksum lookup of this kind is sketched after this list.
- Last Wednesday (17th), following a problem flagged up by LHCb, it was realized that we were using incorrect VOMS servers for regenerating the Castor gridmap files. This was fixed that day.
- A problem transferring files from FNAL to us is being investigated.
- Yesterday, Tuesday 23rd June, there were two network issues. The first was a spontaneous reboot of a core network switch that led to a break in connectivity to the Tier1 for around 8 minutes. The second was a very high rate of traffic around the Tier1 network that lasted for around 45 minutes from 16:00. This is not completely understood, but appears to have been caused by the restart of an old hypervisor, which led to more than one copy of a particular VM running.
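The following is an illustrative sketch only, assuming the gfal2 Python bindings are installed and using a hypothetical SURL; it shows the kind of checksum lookup involved, not the actual procedure used on the affected files.

```python
# Minimal sketch: ask a storage endpoint for the ADLER32 checksum of a file.
# Requires the gfal2 Python bindings. The SURL below is a hypothetical
# placeholder, not one of the affected LHCb files.
import gfal2

SURL = "srm://srm-lhcb.gridpp.rl.ac.uk/castor/example/path/file.dst"  # placeholder

ctx = gfal2.creat_context()

try:
    # A file with no stored checksum may fail here or force an on-the-fly
    # recalculation, which is the situation the old LHCb files were in.
    remote_sum = ctx.checksum(SURL, "ADLER32")
    print(f"Checksum reported for {SURL}: {remote_sum}")
except gfal2.GError as err:
    print(f"Checksum lookup failed: {err}")
```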
Resolved Disk Server Issues |
- GDSS711 (CMSDisk - D1T0) failed on Wednesday evening (17th). The server was checked out but no specific fault found. It was returned to service on Friday (19th).
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- The post mortem review of the network incident on the 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked (a simple probe of the kind used for such tracking is sketched below).
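As an illustration of what such tracking can involve, the sketch below runs a simple packet-loss probe; the target hostname and probe settings are placeholders, and this is not the monitoring actually deployed at RAL.

```python
# Rough sketch of a packet-loss probe of the kind used to track intermittent,
# low-level loss on a link. The target host is a placeholder, not the real
# OPN endpoint.
import re
import subprocess

TARGET = "opn-gateway.example.org"  # placeholder hostname
PROBES = 50                         # packets per measurement

result = subprocess.run(
    ["ping", "-c", str(PROBES), "-i", "0.2", TARGET],
    capture_output=True,
    text=True,
)

# A typical ping summary line: "50 packets transmitted, 49 received, 2% packet loss"
match = re.search(r"([\d.]+)% packet loss", result.stdout)
if match:
    print(f"Packet loss to {TARGET}: {match.group(1)}%")
else:
    print("Could not parse ping output:")
    print(result.stdout or result.stderr)
```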
Ongoing Disk Server Issues
- GDSS763 (AtlasDataDisk - D1T0) failed in the early hours of Friday morning (19th June). This was one of the disk servers added to AtlasDataDisk after it became full. After initial checks it was put back in service read-only, but crashed again. The disks have now been placed in a different server/chassis and it is being drained.
Notable Changes made since the last meeting.
- On Wednesday (just after last week's meeting) a network test was carried out. This confirmed that non-Tier1 services on our network now route traffic avoiding our (problematic) Tier1 router pair. The test also confirmed the long-standing problem in the primary Tier1 router. This test paves the way for a longer intervention, with the vendor present, to try and get to the bottom of the router problem.
- Old files in Castor that were missing stored checksums have had these added.
- Eight additional disk servers have been added to AtlasDataDisk (approaching a Petabyte of extra capacity).
- The batch job limit for Alice has been completely removed. (It was set at 6000).
- A detailed change to a database procedure was made on the LHCb Castor stager yesterday (Tuesday 23rd) and on the CMS(?) stager today (Wed 24th). This change significantly speeds up file open times within Castor.
- Files are being successfully transferred from Durham for Dirac; a simple copy test of this kind is sketched below.
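As an aside, a one-off copy test of this kind can be run with the gfal2 Python bindings; the sketch below is illustrative only, with placeholder source and destination URLs, and is not the FTS machinery actually used for these transfers.

```python
# Minimal sketch of a single third-party copy test between two storage
# endpoints, of the kind used to confirm that transfers succeed.
# Requires the gfal2 Python bindings; both URLs are placeholders.
import gfal2

SRC = "gsiftp://source-se.example.org/dpm/example/test.file"      # placeholder
DST = "srm://srm-dirac.gridpp.rl.ac.uk/castor/example/test.file"  # placeholder

ctx = gfal2.creat_context()
params = ctx.transfer_parameters()
params.overwrite = True        # replace any existing test file at the destination
params.checksum_check = True   # verify source and destination checksums match
params.timeout = 300           # give up after five minutes

try:
    ctx.filecopy(params, SRC, DST)
    print("Copy succeeded and checksums matched.")
except gfal2.GError as err:
    print(f"Copy failed: {err}")
```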
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production). |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Databases:
  - Switch LFC/3D to new Database Infrastructure.
  - Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done).
- Castor:
  - Enable disk server rebalancing.
  - Update SRMs to new version (includes updating to SL6).
  - Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
  - Update disk servers to SL6.
  - Update to Castor version 2.1.15.
- Networking:
  - Resolve problems with the primary Tier1 Router. Need to schedule a roughly half-day outage for the vendor to carry out investigations.
  - Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to router software.)
  - Increase bandwidth of the link from the Tier1 into the RAL internal site network to 40Gbit.
  - Make routing changes to allow the removal of the UKLight Router.
  - Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
  - Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | OUTAGE | 17/06/2015 13:45 | 17/06/2015 15:15 | 1 hour and 30 minutes | During this time window there will be a 15 minute disconnection of the RAL-LCG2 site from the network. This will take place sometime between 13:00 - 13:30 UTC. For this 15-minute period all services will be unavailable. The Castor storage system will be stopped at 12:45 UTC before the network break, and restarted once the 15-minute break is over. The declared time window allows time for Castor and other services to be checked out after the network break. The network outage is for a test of a revised network configuration. |
srm-atlas.gridpp.rl.ac.uk, | UNSCHEDULED | WARNING | 17/06/2015 10:00 | 17/06/2015 17:00 | 7 hours | AtlasDataDisk full. Working to resolve this. Other Atlas areas, and reads from AtlasDataDisk are OK. |
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production). |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
114512 | Green | Less Urgent | In Progress | 2015-06-12 | 2015-06-22 | Atlas | deletion errors for RAL-LCG2 |
113910 | Green | Less Urgent | Waiting Reply | 2015-05-26 | 2015-06-23 | SNO+ | RAL data staging |
113836 | Green | Less Urgent | In Progress | 2015-05-20 | 2015-06-24 | | GLUE 1 vs GLUE 2 mismatch in published queues
112721 | Yellow | Less Urgent | In Progress | 2015-03-28 | 2015-06-23 | Atlas | RAL-LCG2: SOURCE Failed to get source file size |
109694 | Red | Urgent | Waiting Reply | 2014-11-03 | 2015-06-23 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-06-16 | CMS | AAA access test failing at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
17/06/15 | 93.8 | 94.0 | 50.0 | 93.0 | 96.0 | 96 | 98 | Atlas: DataDiskFull; All: Tests failed during network outage (planned test). |
18/06/15 | 100 | 100 | 100 | 100 | 100 | 97 | 95 | |
19/06/15 | 100 | 100 | 98.0 | 100 | 100 | 93 | 98 | Single SRM Test failure: Could not open connection to srm-atlas |
20/06/15 | 100 | 100 | 98.0 | 100 | 100 | 100 | 100 | Single SRM Test failure: Error trying to locate the file in the disk cache |
21/06/15 | 100 | 100 | 100 | 100 | 100 | 96 | 99 | |
22/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
23/06/15 | 100 | 100 | 100 | 95.0 | 100 | 87 | 99 | SRM and CE test failures during local network problem. |