From GridPP Wiki
Latest revision as of 08:20, 24 August 2016
RAL Tier1 Operations Report for 17th August 2016
Review of Issues during the week 10th to 17th August 2016.
- On Thursday of last week (11th August) it was noticed that Atlas was having problems copying some files from us. This was traced to files that had been on disk server GDSS634 (AtlasTape). The disk copies of the files had been cleaned up during an intervention on the server, but the corresponding cleanup had not been done in the Castor database. This has since been resolved.
- There was a problem with the Castor GEN instance on Friday evening (12th August). A disk failure led to problems with one of the nodes that make up the Oracle RAC cluster that hosts some of the Castor databases. The fail-over of the databases running on that node (which happened to be those for the GEN instance) did not happen quickly. The on-call team intervened and the problem was fixed late in the evening.
- There has been a problem with packet loss seen across a part of the Tier1 network. The problem has been intermittent and has so far occurred in two blocks over the last couple of days. It is not yet understood. It is not linked to the doubling of the OPN link made a couple of weeks ago, which has so far worked well.
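The GDSS634 issue above came down to a mismatch between the files actually on the server and the disk copies the Castor database still recorded. A minimal, illustrative sketch of the kind of reconciliation check that catches such mismatches (the file IDs are hypothetical and Castor's real schema is not shown; this is not the Tier1's actual procedure):

```python
# Illustrative only: find catalogue entries that still point at disk
# copies which no longer exist on the server (file IDs are made up).
on_disk = {"f001", "f002", "f005"}               # files actually on the server
in_catalogue = {"f001", "f002", "f003", "f005"}  # disk copies the DB records

# Entries recorded in the catalogue but gone from disk are the stale ones
# that would cause failed copy attempts like those Atlas saw.
stale = sorted(in_catalogue - on_disk)
print(stale)  # → ['f003']
```

In practice such a check would be run per disk server after any intervention that removes files, so the database cleanup is never left behind.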
Resolved Disk Server Issues
- None
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failure when copying the results of batch jobs to Castor. There is also a further problem that sometimes occurs when these (failed) writes are attempted to storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
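The intermittent loss described above has appeared in distinct blocks rather than as a steady background rate. A minimal sketch of how per-interval loss measurements can be grouped into such blocks (the function name, threshold and sample values are hypothetical, not the Tier1's actual monitoring code):

```python
# Illustrative sketch: group consecutive lossy measurement intervals
# into "blocks" so intermittent episodes can be counted and reported.

def loss_blocks(samples, threshold=1.0):
    """Return (start_index, end_index) pairs of consecutive intervals
    whose packet-loss percentage exceeds `threshold`."""
    blocks = []
    start = None
    for i, loss in enumerate(samples):
        if loss > threshold:
            if start is None:
                start = i          # a lossy episode begins
        elif start is not None:
            blocks.append((start, i - 1))  # episode just ended
            start = None
    if start is not None:
        blocks.append((start, len(samples) - 1))  # episode ran to the end
    return blocks

# Hypothetical per-interval loss percentages: two distinct episodes.
samples = [0.0, 0.0, 3.2, 4.1, 0.0, 0.0, 0.0, 2.5, 2.8, 0.1]
print(loss_blocks(samples))  # → [(2, 3), (7, 8)]
```

Counting and timing such blocks against load data is one way to test the "load-related" hypothesis.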
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- The migration of Atlas data from "C" to "D" tapes has been completed.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-biomed.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/08/2016 14:00 | 05/09/2016 14:00 | 32 days | Storage for BIOMED is no longer supported since the removal of the GENScratch storage area. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
- Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
- Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric:
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Castor GEN instance | UNSCHEDULED | OUTAGE | 12/08/2016 18:00 | 13/08/2016 00:30 | 6 hours and 30 minutes | Castor Gen instance down due to hardware failure - service should be restored in a few hours time |
srm-biomed.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/08/2016 14:00 | 05/09/2016 14:00 | 32 days | Storage for BIOMED is no longer supported since the removal of the GENScratch storage area. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
123421 | Green | Less Urgent | In Progress | 2016-08-16 | 2016-08-16 | T2K | Weird truncation of filename during LFC registration |
123403 | Green | Less Urgent | Waiting Reply | 2016-08-15 | 2016-08-17 | CMS | FTS gets a SIGSEGV during a transfer |
123382 | Green | Very Urgent | In Progress | 2016-08-12 | 2016-08-12 | LHCb | Jobs can not connect to sqlDB at CVMFS at RAL-LCG2 |
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-07-27 | SNO+ | Disk area at RAL |
122364 | Green | Less Urgent | In Progress | 2016-06-27 | 2016-07-15 | | cvmfs support at RAL-LCG2 for solidexperiment.org |
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar |
120350 | Green | Less Urgent | Waiting Reply | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
10/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
11/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
12/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | N/A | |
13/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
14/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
15/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
16/08/16 | 100 | 100 | 100 | 100 | 96 | 100 | 100 | Single SRM error on listing: "No such file or directory". |