RAL Tier1 Operations Report for 10th August 2016
Review of Issues during the week 3rd to 10th August 2016.
- There was more saturation of the inbound 10Gbit OPN link. In response, the connection was reconfigured on 4th August to use both links, effectively upgrading it to a 20Gbit connection.
- There was a problem with the AtlasScratchDisk yesterday evening (9th August). This was fixed by the on-call team restarting one of the SRMs. Atlas SAM tests were affected, and the batch queue ANALY_RAL_SL6 was offline from 18:50 to 00:20.
- We are seeing some errors in Castor relating to missing CAs. These started on the LHCb instance on the 25th July and on the GEN instance on the 8th August. However, we do not yet have enough information to identify which CAs these errors correspond to; a minimal check of the installed CA set is sketched below.
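As a starting point for tracking this down, the sketch below lists the subject of every CA certificate installed on a node so that the sets on a working node and a failing node can be compared. It is a minimal example rather than our actual procedure; the standard grid CA directory /etc/grid-security/certificates and an openssl binary on the PATH are assumptions.

```python
# Minimal sketch: list installed grid CA subjects for comparison between nodes.
# Assumptions: CA certificates live as *.pem files under
# /etc/grid-security/certificates and the openssl CLI is available.
import glob
import subprocess

CA_DIR = "/etc/grid-security/certificates"

def installed_ca_subjects(ca_dir=CA_DIR):
    """Return the set of subject DNs of all CA certificates in ca_dir."""
    subjects = set()
    for pem in glob.glob(f"{ca_dir}/*.pem"):
        result = subprocess.run(
            ["openssl", "x509", "-noout", "-subject", "-in", pem],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            subjects.add(result.stdout.strip())
    return subjects

if __name__ == "__main__":
    for subject in sorted(installed_ca_subjects()):
        print(subject)
```

Diffing the output from a node that works and one that shows the errors should reveal any CA present on one but missing on the other.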
Resolved Disk Server Issues
- GDSS634 (AtlasTape, D0T1) crashed on Friday 29th July and was returned to service on Monday 8th August. The server had crashed before, so all of its disks were replaced and it was re-acceptance tested before being put back into service.
- GDSS748 (AtlasDataDisk, D1T0) has been returned to service. It went out of service in June and was completely drained before being fully checked out.
Current operational status and issues
- LHCb are seeing a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these (failed) writes are then attempted to storage at other sites. A recent modification has improved, but not completely fixed, this.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked. Likewise, we have been working to understand a remaining low level of packet loss seen within part of the Tier1 network; a simple way of quantifying such loss is sketched below.
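The sketch below illustrates one way of quantifying low-level packet loss to a set of endpoints using the system ping. It is a minimal example rather than our actual monitoring setup; the host names and packet counts are illustrative assumptions.

```python
# Minimal sketch: estimate the packet-loss rate to a few endpoints with ping.
# The host names below are hypothetical placeholders, not real monitoring targets.
import re
import subprocess

HOSTS = ["perfsonar.example.org", "opn-gw.example.org"]  # hypothetical endpoints
COUNT = 200  # packets per host; larger samples resolve smaller loss rates

def loss_percent(host, count=COUNT):
    """Run ping and parse the summary line for the packet-loss percentage."""
    output = subprocess.run(
        ["ping", "-q", "-c", str(count), "-i", "0.2", host],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"([\d.]+)% packet loss", output)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: {loss_percent(host)}% loss")
```

Note that a 200-packet sample can only resolve loss down to about 0.5%, so characterising the kind of intermittent low-level loss described above needs either much larger samples or repeated runs over time.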
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- The use of both of the CERN OPN links was enabled on Thursday morning, 4th August. This gives us a maximum of 20Gbit bandwidth. During the morning we then made some tests, dropping each of the links in turn to confirm failback to 10Gbit. These tests were successful.
- The migration of Atlas data from "C" to "D" tapes is virtually complete, with only around 15 tapes left to migrate out of the total of 1300 (a quick arithmetic check of the progress is included below).
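For reference, the figures quoted above translate to the following progress percentage; this is just quick arithmetic on the numbers in the bullet, with "around 15" treated as exact.

```python
# Quick arithmetic check of the tape-migration progress quoted above.
total_tapes = 1300
remaining = 15          # "around 15" treated as exact for illustration
migrated = total_tapes - remaining
print(f"{migrated}/{total_tapes} tapes migrated = {100 * migrated / total_tapes:.1f}% complete")
# -> 1285/1300 tapes migrated = 98.8% complete
```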
Declared in the GOC DB
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| srm-biomed.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/08/2016 14:00 | 05/09/2016 14:00 | 32 days | Storage for BIOMED is no longer supported since the removal of the GENScratch storage area. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
  - Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
  - Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
  - Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
  - Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric:
  - Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| srm-biomed.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/08/2016 14:00 | 05/09/2016 14:00 | 32 days | Storage for BIOMED is no longer supported since the removal of the GENScratch storage area. |
Open GGUS Tickets (Snapshot during morning of meeting)
| GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
|---|---|---|---|---|---|---|---|
| 122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-07-27 | SNO+ | Disk area at RAL |
| 122364 | Green | Less Urgent | In Progress | 2016-06-27 | 2016-07-15 | | cvmfs support at RAL-LCG2 for solidexperiment.org |
| 121687 | Amber | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar |
| 120350 | Green | Less Urgent | Waiting Reply | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
| 119841 | Red | Less Urgent | On Hold | 2016-03-01 | 2016-08-09 | LHCb | HTTP support for lcgcadm04.gridpp.rl.ac.uk |
| 117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud
| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
|---|---|---|---|---|---|---|---|---|
| 03/08/16 | 100 | 100 | 100 | 98 | 96 | 100 | 100 | CMS: Single SRM test failure on Get (File creation canceled since diskPool is full); LHCb: Single SRM test failure on List (SRM_FILE_BUSY). |
| 04/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 05/08/16 | 100 | 100 | 100 | 98 | 100 | 100 | N/A | Single SRM test failure on Get: User timeout over. |
| 06/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 07/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 08/08/16 | 100 | 100 | 100 | 100 | 96 | 100 | 100 | Single SRM test failure on list then delete (No such file or directory). |
| 09/08/16 | 100 | 100 | 94 | 100 | 100 | 91 | N/A | Problem with one of the Atlas Castor SRMs. |
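As a simple summary of the table above, the sketch below computes the weekly average for each column. The values are transcribed directly from the table, and "N/A" entries are skipped when averaging.

```python
# Weekly averages of the daily availability figures in the table above.
# Values are transcribed from the table; None marks an "N/A" entry.
COLUMNS = ["OPS", "Alice", "Atlas", "CMS", "LHCb", "Atlas HC", "CMS HC"]
ROWS = {
    "03/08/16": [100, 100, 100, 98, 96, 100, 100],
    "04/08/16": [100, 100, 100, 100, 100, 100, 100],
    "05/08/16": [100, 100, 100, 98, 100, 100, None],
    "06/08/16": [100, 100, 100, 100, 100, 100, 100],
    "07/08/16": [100, 100, 100, 100, 100, 100, 100],
    "08/08/16": [100, 100, 100, 100, 96, 100, 100],
    "09/08/16": [100, 100, 94, 100, 100, 91, None],
}

for i, column in enumerate(COLUMNS):
    values = [row[i] for row in ROWS.values() if row[i] is not None]
    print(f"{column}: {sum(values) / len(values):.1f}%")
```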