RAL Tier1 Operations Report for 1st July 2015
Review of Issues during the week 24th June to 1st July 2015.
- LHCb have flagged up a problem running jobs on the batch nodes being used to test the new worker node configuration. A likely cause has been identified and a fix is being rolled out.
- A configuration error with the network interface on one of the new batches of worker nodes was identified last week. The affected nodes have been drained and re-installed to fix this and are expected to go back into production imminently.
- A problem transferring some files from FNAL to us is being investigated.
- There was a problem with the test FTS3 service at the end of last week. The underlying cause was triggered by the problem that affected our network on Tuesday (23rd June), reported last week. Atlas switched transfers to use the production service (a sketch of polling an FTS3 job's state follows this list).
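As an illustrative aside, not part of the report itself: the state of a transfer job on an FTS3 server can be polled through its REST interface. A minimal sketch follows; the endpoint, job ID, and proxy path are placeholders, not values from this incident.

```python
# Minimal sketch: poll the state of one FTS3 transfer job via the REST API.
# The endpoint, job ID and proxy path below are illustrative placeholders.
import requests

FTS3 = "https://fts3-test.example.ac.uk:8446"    # placeholder FTS3 endpoint
JOB_ID = "1234abcd-0000-0000-0000-00000000abcd"  # placeholder job ID
PROXY = "/tmp/x509up_u500"                       # X.509 proxy (cert + key in one PEM)

resp = requests.get(
    f"{FTS3}/jobs/{JOB_ID}",
    cert=PROXY,  # requests accepts a combined cert+key PEM such as a grid proxy
    verify="/etc/grid-security/certificates",  # hashed CA directory
)
resp.raise_for_status()
job = resp.json()
print(job["job_state"], job.get("reason") or "")
```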
Resolved Disk Server Issues
- The draining of GDSS763 (AtlasDataDisk - D1T0) has been completed and the server removed. This server failed on the 19th June. Following further crashes, the disks were placed in a different server/chassis so that all files could be drained off. 13 files were reported lost to Atlas; all of these were being written at around the time the server initially failed (see the sketch below).
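For illustration only (the actual procedure is not described in the report): files written around the failure time could be picked out of a namespace dump along these lines. The dump format, failure time, and window width are all assumptions.

```python
# Minimal sketch: list files whose write time falls close to the server
# failure. The CSV format, failure time and window width are assumptions.
import csv
from datetime import datetime, timedelta

FAILURE = datetime(2015, 6, 19, 12, 0)  # GDSS763 failed 19th June (time assumed)
WINDOW = timedelta(hours=2)             # assumed window around the failure

with open("namespace_dump.csv") as fh:  # assumed rows: path,2015-06-19T11:42:00
    for path, stamp in csv.reader(fh):
        written = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S")
        if abs(written - FAILURE) <= WINDOW:
            print(path)
```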
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- The post-mortem review of the network incident on the 8th April is being finalised.
- The intermittent, low-level, load-related packet loss over the OPN to CERN is still being tracked.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- A change to a database procedure that speeds up file open times within Castor was made to the Atlas instance on Monday (29th June). It will be made on the GEN instance today. (Last week we reported that the change had already been made for the LHCb & CMS Castor instances.)
- An updated version of xrootd (upgraded from version 3.3.3 to 3.3.6) was applied to the Castor GEN instance to enable third-party transfers for Alice (illustrated in the sketch after this list).
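As an illustrative aside: a third-party copy asks the two xrootd servers to move the data directly between themselves rather than through the client. A minimal sketch driving a TPC-capable xrdcp client from Python; the endpoints and path are placeholders, and the availability of the --tpc option on the client in use is an assumption.

```python
# Minimal sketch: request a server-to-server (third-party) copy between two
# xrootd endpoints by driving the xrdcp client. Endpoints are placeholders.
import subprocess

SRC = "root://source.example.ac.uk//castor/example/test.root"
DST = "root://dest.example.ac.uk//castor/example/test.root"

# "--tpc only" insists on a direct server-to-server copy instead of
# falling back to routing the data through the client host.
subprocess.run(["xrdcp", "--tpc", "only", SRC, DST], check=True)
```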
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 01/07/2015 10:00 | 01/07/2015 11:30 | 1 hour and 30 minutes | Warning on site during quarterly UPS/Generator load test. |
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production.) |
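As a cross-check on the Duration column, a minimal sketch recomputing it from the Start and End timestamps as published above (dd/mm/yyyy hh:mm; timezone handling ignored):

```python
# Minimal sketch: recompute the Duration column from the Start/End
# timestamps in the table above (dd/mm/yyyy hh:mm; timezones ignored).
from datetime import datetime

FMT = "%d/%m/%Y %H:%M"
entries = [
    ("Whole site", "01/07/2015 10:00", "01/07/2015 11:30"),
    ("arc-ce05.gridpp.rl.ac.uk", "10/06/2015 12:00", "08/07/2015 12:00"),
]
for service, start, end in entries:
    duration = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    print(f"{service}: {duration}")
# -> Whole site: 1:30:00
# -> arc-ce05.gridpp.rl.ac.uk: 28 days, 0:00:00
```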
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Databases:
  - Switch LFC/3D to the new database infrastructure.
  - Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor and Atlas Frontier (LFC done).
- Castor:
  - Update the SRMs to the new version (includes updating to SL6).
  - Update the Oracle databases behind Castor to version 11.2.0.4. This will require some downtimes (see above).
  - Update disk servers to SL6.
  - Update to Castor version 2.1.15.
- Networking:
  - Resolve problems with the primary Tier1 router. A roughly half-day outage needs to be scheduled for the vendor to carry out investigations.
  - Enable the RIP protocol for updating routing tables on the Tier1 routers (requires a patch to the router software).
  - Increase the bandwidth of the link from the Tier1 into the RAL internal site network to 40 Gbit/s.
  - Make routing changes to allow the removal of the UKLight router.
  - Cabling/switch changes to the network in the UPS room to improve resilience.
- Fabric:
  - Firmware updates on the remaining EMC disk arrays (Castor, FTS/LFC).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 01/07/2015 10:00 | 01/07/2015 11:30 | 1 hour and 30 minutes | Warning on site during quarterly UPS/Generator load test. |
arc-ce05.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 10/06/2015 12:00 | 08/07/2015 12:00 | 28 days | This particular ARC CE will be decommissioned. (Four other ARC CEs remain in production.) |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
114659 | Green | Top Priority | In Progress | 2015-06-26 | 2015-06-30 | LHCb | Infrastructure problem at RAL |
113910 | Green | Less Urgent | Waiting Reply | 2015-05-26 | 2015-06-23 | SNO+ | RAL data staging |
113836 | Green | Less Urgent | In Progress | 2015-05-20 | 2015-06-24 | GLUE 1 vs GLUE 2 mismatch in published queues | |
108944 | Red | Less Urgent | In Progress | 2014-10-01 | 2015-06-16 | CMS | AAA access test failing at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
24/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
25/06/15 | 100 | 100 | 100 | 100 | 100 | 97 | 99 | |
26/06/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
27/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | n/a | |
28/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
29/06/15 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
30/06/15 | 54.9 | 100 | 100 | 100 | 100 | 85 | 100 | |
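For illustration (the actual figures are produced by the monitoring framework): a daily availability value such as the 54.9% OPS figure above is essentially the fraction of the day's probe samples that passed. The sample counts in this sketch are invented purely to reproduce the arithmetic.

```python
# Minimal sketch: a daily availability figure as the passed fraction of the
# day's probe samples. The counts below are invented for illustration only.
ok_samples, total_samples = 79, 144  # e.g. 10-minute probes over 24 h (assumed)
availability = 100.0 * ok_samples / total_samples
print(f"{availability:.1f}%")  # -> 54.9%
```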