Revision as of 10:49, 17 August 2016
RAL Tier1 Operations Report for 17th August 2016
Review of Issues during the week 10th to 17th August 2016.
- On Thursday of last week (11th Aug) it was noticed that Atlas was having a problem copying some files from us. This was traced to files that had been on disk server GDSS634 (AtlasTape). These disk copies of the files were cleaned up during an intervention on the server but the corresponding cleanup had not been done in the Castor database. This has since been resolved.
- There was a problem with the Atlas GEN instance on Friday evening (12th August). A disk failure led to problems with one of the nodes that make up the Oracle RAC cluster that hosts some of the Castor databases. The fail-over of the databases running on that node (which happened to be those for the GEN instance) did not happen quickly. The on-call team intervened and the problem was fixed late in the evening.
- There has been a problem with packet loss seen across part of the Tier1 network. The problem has been intermittent and has, so far, occurred in two blocks over the last couple of days. It is not yet understood. This problem is not linked to the doubling of the OPN link made a couple of weeks ago, which has so far worked well.
Resolved Disk Server Issues
- None
Current operational status and issues
- There is a problem seen by LHCb of a low but persistent rate of failures when copying the results of batch jobs to Castor. A further problem sometimes occurs when these failed writes are then attempted against storage at other sites.
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
Ongoing Disk Server Issues
- None
Notable Changes made since the last meeting.
- The migration of Atlas data from "C" to "D" tapes has been completed.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-biomed.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/08/2016 14:00 | 05/09/2016 14:00 | 32 days | Storage for BIOMED is no longer supported since the removal of the GENScratch storage area. |
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6. This will be done after the Castor 2.1.15 update.
- Update to Castor version 2.1.15. This awaits successful resolution and testing of the new version.
- Migration of LHCb data from T10KC to T10KD tapes.
- Networking:
- Replace the UKLight Router. Then upgrade the 'bypass' link to the RAL border routers to 2*40Gbit.
- Fabric
- Firmware updates on older disk servers.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Castor GEN instance | UNSCHEDULED | OUTAGE | 12/08/2016 18:00 | 13/08/2016 00:30 | 6 hours and 30 minutes | Castor Gen instance down due to hardware failure - service should be restored in a few hours time |
srm-biomed.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 04/08/2016 14:00 | 05/09/2016 14:00 | 32 days | Storage for BIOMED is no longer supported since the removal of the GENScratch storage area. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
123421 | Green | Less Urgent | In Progress | 2016-08-16 | 2016-08-16 | T2K | Weird truncation of filename during LFC registration |
123403 | Green | Less Urgent | Waiting Reply | 2016-08-15 | 2016-08-17 | CMS | FTS gets a SIGSEGV during a transfer |
123382 | Green | Very Urgent | In Progress | 2016-08-12 | 2016-08-12 | LHCb | Jobs can not connect to sqlDB at CVMFS at RAL-LCG2 |
122827 | Green | Less Urgent | In Progress | 2016-07-12 | 2016-07-27 | SNO+ | Disk area at RAL |
122364 | Green | Less Urgent | In Progress | 2016-06-27 | 2016-07-15 | | cvmfs support at RAL-LCG2 for solidexperiment.org |
121687 | Red | Less Urgent | On Hold | 2016-05-20 | 2016-05-23 | | packet loss problems seen on RAL-LCG perfsonar |
120350 | Green | Less Urgent | Waiting Reply | 2016-03-22 | 2016-08-09 | LSST | Enable LSST at RAL |
117683 | Amber | Less Urgent | On Hold | 2015-11-18 | 2016-04-05 | | CASTOR at RAL not publishing GLUE 2 |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 729); CMS HC = CMS HammerCloud. All values are percentages.
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
10/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
11/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
12/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | N/A | |
13/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
14/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
15/08/16 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
16/08/16 | 100 | 100 | 100 | 100 | 96 | 100 | 100 | Single SRM error on listing: "No such file or directory". |