Revision as of 10:00, 16 July 2014
RAL Tier1 Operations Report for 16th July 2014
Review of Issues during the week 9th to 16th July 2014.
- There were problems with the SRM (not Castor) for the GEN instance on Thursday and Friday of last week (3/4 July). It was fixed by a database edit.
- Problems with Atlas multicore jobs on Friday 4th July. We believe this was an Atlas issue.
Resolved Disk Server Issues
- On Wednesday 9th July GDSS546 (CMSTape - D0T1) crashed. It was returned to service on Friday (11th). The RAID array was reporting a problem - but no failed drives were found. One file was lost at the time the server crashed.
- On Thursday 10th July GDSS527 (CMSTape - D0T1) was taken out of service for a couple of hours to investigate why it did not see a replacement drive.
- On Sunday 13th July GDSS720 (AtlasDataDisk - D1T0) crashed. It was returned to service the next day (Monday 14th), although no fault was found. Fifteen files were lost at the time the server crashed.
Current operational status and issues
- We are still investigating xroot access to CMS Castor following the upgrade on the 17th June.
- There is a problem with the dteam SRM regional nagios tests, which may be caused by how dteam is published by the CIP.
Ongoing Disk Server Issues
- None
Notable Changes made this last week.
- On Tuesday and Wednesday (8th and 9th July) the Atlas Castor instance was upgraded to version 2.1.14-13. Castor Atlas was returned to production at 10:40 on the Wednesday morning (9th July).
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
perfsonar-ps01.gridpp.rl.ac.uk, perfsonar-ps02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 14/07/2014 11:00 | 14/08/2014 11:00 | 31 days | Systems being decommissioned. They have been replaced by lcgps01.gridpp.rl.ac.uk and lcgps02.gridpp.rl.ac.uk
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- We are planning the termination of the FTS2 service (announced for 2nd September) now that almost all use is on FTS3.
Listing by category:
- Databases:
- Switch LFC/FTS/3D to new Database Infrastructure.
- Castor:
- None.
- Networking:
- Move switches connecting the 2011 disk server batches onto the Tier1 mesh network.
- Make routing changes to allow the removal of the UKLight Router.
- Fabric:
- We are phasing out the use of the software server used by the small VOs.
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
- There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 9th and 16th July 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
perfsonar-ps01.gridpp.rl.ac.uk, perfsonar-ps02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 14/07/2014 11:00 | 14/08/2014 11:00 | 31 days | Systems being decommissioned. They have been replaced by lcgps01.gridpp.rl.ac.uk and lcgps02.gridpp.rl.ac.uk
Castor GEN SRMs. (srm-alice, srm-biomed, srm-dteam, srm-hone, srm-ilc, srm-mice, srm-minos, srm-na62, srm-snoplus, srm-superb, srm-t2k) | UNSCHEDULED | WARNING | 11/07/2014 17:10 | 14/07/2014 12:00 | 2 days, 18 hours and 50 minutes | There was a problem with the Castor GEN instance SRMs (Castor OK, but not the SRMs). Now improved. Setting a WARNING state over the weekend.
Castor GEN SRMs. (srm-alice, srm-biomed, srm-dteam, srm-hone, srm-ilc, srm-mice, srm-minos, srm-na62, srm-snoplus, srm-superb, srm-t2k) | UNSCHEDULED | OUTAGE | 11/07/2014 13:00 | 11/07/2014 17:00 | 4 hours | We are investigating a problem with the Castor GEN instance SRMs. (Castor OK, but not the SRMs).
srm-atlas.gridpp.rl.ac.uk, | SCHEDULED | OUTAGE | 08/07/2014 06:00 | 09/07/2014 10:39 | 1 day, 4 hours and 39 minutes | Atlas Castor instance down for Castor 2.1.14 Stager Update |
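The downtime entries above are published in the GOC DB, which also exposes a public programmatic interface (PI). As a minimal sketch, a query listing downtimes for this site could be built as below; the base URL and the `get_downtime`/`topentity` parameters are assumptions based on the public GOCDB PI, not taken from this report.

```python
from urllib.parse import urlencode

# Assumed GOC DB programmatic-interface base URL (not from this report).
GOCDB_PI = "https://goc.egi.eu/gocdbpi/public/"

def downtime_query_url(site: str) -> str:
    """Build a GOC DB PI URL listing declared downtimes for a site."""
    params = {"method": "get_downtime", "topentity": site}
    return GOCDB_PI + "?" + urlencode(params)

url = downtime_query_url("RAL-LCG2")
```

Fetching that URL (e.g. with `urllib.request`) would return an XML listing of downtime records, which could be cross-checked against the table above.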
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
106753 | Green | Less Urgent | In Progress | 2014-07-09 | 2014-07-09 | Atlas | Errors in transfers to RAL-LCG2 |
106695 | Green | Less Urgent | In Progress | 2014-07-08 | 2014-07-08 | Ops | [Rod Dashboard] Issues detected at RAL-LCG2 |
106655 | Green | Less Urgent | In Progress | 2014-07-04 | 2014-07-04 | Ops | [Rod Dashboard] Issues detected at RAL-LCG2 (srm-dteam) |
106640 | Green | Less Urgent | In Progress | 2014-07-04 | 2014-07-04 | ILC | Failure to submit jobs to RAL-LCG2 CEs |
106610 | Green | Less Urgent | In Progress | 2014-07-02 | 2014-07-02 | HyperK | HyperK support |
106324 | Yellow | Urgent | In Progress | 2014-06-18 | 2014-07-01 | CMS | pilots losing network connections at T1_UK_RAL |
105405 | Red | Urgent | On Hold | 2014-05-14 | 2014-07-01 | | please check your Vidyo router firewall configuration
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
09/07/14 | 100 | 100 | 99.3 | 96.1 | 95.8 | 100 | 100 | Central networking problem |
10/07/14 | 100 | 100 | 98.0 | 100 | 100 | 97 | 100 | srmServer restart. |
11/07/14 | 100 | 100 | 98.0 | 100 | 100 | 97 | 100 | srmServer restart. |
12/07/14 | 100 | 100 | 100 | 100 | 100 | 99 | 99 | |
13/07/14 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
14/07/14 | 100 | 100 | 100 | 100 | 100 | 99 | 100 | |
15/07/14 | 100 | 100 | 100 | 100 | 100 | 99 | 99 |
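For a quick summary of the table above, the daily figures can be averaged per VO. This is a minimal sketch using a plain arithmetic mean over the seven days shown (9th to 15th July), not the official WLCG availability algorithm; the numbers are copied directly from the table.

```python
# Daily availability (%) for 09/07/14 through 15/07/14, copied from the table.
daily = {
    "OPS":   [100, 100, 100, 100, 100, 100, 100],
    "Alice": [100, 100, 100, 100, 100, 100, 100],
    "Atlas": [99.3, 98.0, 98.0, 100, 100, 100, 100],
    "CMS":   [96.1, 100, 100, 100, 100, 100, 100],
    "LHCb":  [95.8, 100, 100, 100, 100, 100, 100],
}

# Plain mean per VO, rounded to two decimal places.
weekly = {vo: round(sum(vals) / len(vals), 2) for vo, vals in daily.items()}
```

This gives a rough weekly figure per VO (e.g. Atlas just above 99%), useful as a sanity check against the daily dips noted in the comments column.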