Revision as of 10:32, 7 May 2014
RAL Tier1 Operations Report for 7th May 2014
Review of Issues during the week 30th April to 7th May 2014.
- Problems with "CMSDisk" in Castor reported last week have been resolved. CMS deleted files, freeing up space, and three more disk servers were added to the service class. (Although one of the latter has subsequently failed and is currently out of service.)
- There was a problem on Thursday with the batch farm caused by a particular (biomed) user running very large jobs. This led to problems for other VOs. The jobs were killed and the user was banned on the CEs. The problem recurred over the weekend because the original banning had not been placed in Quattor. The user was banned again over the weekend. A ticket about this from LHCb was raised to 'alarm' status. This was responded to, but the alert did not page out correctly.
- There was a problem between 13:40 and 13:55 yesterday (6th May) when Argus stopped mapping DNs to users and job submissions failed.
Resolved Disk Server Issues
- GDSS758 (CMSDisk - D1T0) failed on Friday morning (2nd May). This was a new server that had only gone into production the previous day. Following initial investigations the server was returned to service later that day. Since then the server has been drained and is undergoing further tests.
Current operational status and issues
- We have now had repeated instances where the OPN link has not cleanly failed over to the backup link during problems with the primary.
Ongoing Disk Server Issues
Notable Changes made this last week.
- Deployed gdss756 - gdss758 to cmsDisk on 1st May.
- Testing of CVMFS client version 2.1.19 is ongoing.
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| srm-lhcb-tape.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 13/05/2014 08:00 | 13/05/2014 11:00 | 3 hours | Outage of tape system for update of library controller. |
| All Castor (SRM) endpoints | SCHEDULED | WARNING | 13/05/2014 08:00 | 13/05/2014 11:00 | 3 hours | Outage of tape system for update of library controller. |
| lcgui02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 30/04/2014 14:00 | 29/05/2014 13:00 | 28 days, 23 hours | Service being decommissioned. |
Advanced warning for other interventions
|
The following items are being discussed and are still to be formally scheduled and announced.
- Provisional dates for the Castor 2.1.14 upgrade: Nameserver: Wed 28th May; Stagers: CMS: Tuesday 10th June; LHCb: Thursday 12th June; GEN: Tuesday 17th June; Atlas: Thursday 19th June.
- FTS3 update planned for Wednesday 14th June.
Listing by category:
- Databases:
  - Switch LFC/FTS/3D to new Database Infrastructure.
- Castor:
  - Castor 2.1.14 testing was largely complete, although a new minor version (2.1.14-12) will be released soon.
- Networking:
  - Move switches connecting recent disk server batches ('11, '12) onto the Tier1 mesh network.
  - Update core Tier1 network and change connection to site and OPN, including:
    - Make routing changes to allow the removal of the UKLight Router.
- Fabric:
  - We are phasing out the use of the software server used by the small VOs.
  - Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
  - There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 30th April and 7th May 2014.
| Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
|---|---|---|---|---|---|---|
| lcgui02.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 30/04/2014 14:00 | 29/05/2014 13:00 | 28 days, 23 hours | Service being decommissioned. |
| Whole site | SCHEDULED | WARNING | 30/04/2014 10:00 | 30/04/2014 12:00 | 2 hours | RAL Tier1 site in warning state due to UPS/generator test. |
Open GGUS Tickets (Snapshot during morning of meeting)
| GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
|---|---|---|---|---|---|---|---|
| 105161 | Green | Less Urgent | In Progress | 2014-05-05 | 2014-05-06 | H1 | hone jobs submitted into CREAM queues through lcgwms05.gridpp.rl.ac.uk & lcgwms06.gridpp.rl.ac.uk WMSs are in Ready status for a long time (more than 5 hours) |
| 105100 | Green | Urgent | In Progress | 2014-05-02 | 2014-05-06 | CMS | T1_UK_RAL Consistency Check (May14) |
| 103197 | Red | Less Urgent | Waiting Reply | 2014-04-09 | 2014-04-09 | | RAL myproxy server and GridPP wiki |
| 101968 | Red | Less Urgent | In Progress | 2014-03-11 | 2014-04-01 | Atlas | RAL-LCG2_SCRATCHDISK: One dataset to delete is causing 1379 deletion errors |
| 98249 | Red | Urgent | In Progress | 2013-10-21 | 2014-04-23 | SNO+ | please configure cvmfs stratum-0 for SNO+ at RAL T1 |
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
|---|---|---|---|---|---|---|---|---|
| 30/04/14 | 100 | 100 | 100 | 100 | 100 | 99 | 99 | |
| 01/05/14 | 100 | 100 | 100 | 100 | 100 | 90 | 99 | |
| 02/05/14 | 100 | 100 | 100 | 100 | 100 | 98 | 99 | |
| 03/05/14 | 100 | 100 | 100 | 100 | 100 | 90 | 99 | Some RAL batch problems followed by a problem with Atlas HammerCloud monitoring. |
| 04/05/14 | 100 | 100 | 100 | 100 | 100 | 87 | 99 | Some RAL batch problems followed by a problem with Atlas HammerCloud monitoring. |
| 05/05/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 06/05/14 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |