RAL Tier1 Operations Report for 19th February 2014
Review of Issues during the week 12th to 19th February 2014.
- On Wednesday (12th Feb) some problems with the Castor LHCb SRM transactions were traced to a test SRM machine that had been incorrectly configured as an LHCb one.
- There were two breaks in the FTS3 service yesterday (Tuesday 18th), each lasting of the order of two hours. The FTS3 systems run as VMs in the Atlas building and it was necessary to move these to another virtual infrastructure. What should have been a straightforward move of a virtual machine gave significant problems and resulted in these breaks in service. During the second break the FTS database was lost, thereby losing the queue of pending transfers.
Resolved Disk Server Issues
Current operational status and issues
- The intermittent failures of Castor access via the SRM (as seen in the availability tests) reported last week are still present. These have been seen across multiple Castor instances. The Castor team are actively working on this and have been in contact with the Castor developers at CERN to try and find a solution; a number of approaches have been tried.
- We are participating in an extensive FTS3 test with Atlas and CMS.
Ongoing Disk Server Issues
Notable Changes made in the last week.
- Following successful testing, CVMFS client version 2.1.17 was rolled out across the batch farm.
- This morning (Wed 19th) there was an upgrade of the production CIP from 2.2.15-2 to 2.2.16-1.
- An outage of the FTS3 service had been announced for this morning (19th February), to cover a network reconfiguration in preparation for the replacement of the floor in the Atlas building. It was subsequently cancelled as the work was completed yesterday (18th). See the review of issues in the last week above.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
The change in the way the Tier1 connects to the site network, proposed for Tuesday 25th February, will now not take place on that day. An alternative date for this change has not yet been agreed.
Listing by category:
- Databases:
- Switch LFC/FTS/3D to new Database Infrastructure.
- Castor:
- Castor 2.1.14 testing is ongoing. A date for deployment awaits successful completion of this testing.
- Networking:
- Implementation of new site firewall. The date for the Tier1 is most likely to be 17th March.
- Update core Tier1 network and change connection to site and OPN including:
- Install new Routing layer for Tier1 & change the way the Tier1 connects to the RAL network.
- These changes will lead to the removal of the UKLight Router.
- Fabric:
- We are phasing out the use of the software server used by the small VOs.
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
- There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
- The floor in the machine room in the Atlas building is being replaced. We currently run some production services on hypervisors located there; these will be moved ahead of the first part of this work (the re-routing of some networking) on the morning of Wednesday 19th February. We are experiencing some problems with the hypervisors for the FTS3 service, which means this move will affect that service.
Entries in GOC DB starting between the 12th and 19th February 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Whole Site | SCHEDULED | WARNING | 12/02/2014 10:00 | 12/02/2014 12:00 | 2 hours | RAL Tier1 site in warning state due to UPS/generator test.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
101351 | Green | Less Urgent | In Progress | 2014-02-18 | 2014-02-18 | Atlas | No performance markers on lcgfts3.gridpp.rl.ac.uk
101323 | Green | Less Urgent | In Progress | 2014-02-18 | 2014-02-18 |  | Publishing default value for Max CPU Time
101314 | Green | Very Urgent | In Progress | 2014-02-17 | 2014-02-18 | Alice | RAL VOBOX for ALICE in bad shape
101310 | Green | Less Urgent | On Hold | 2014-02-17 | 2014-02-17 |  | BDII and SRM publish inconsistent storage capacity numbers
101164 | Green | Less Urgent | In Progress | 2014-02-12 | 2014-02-13 | Atlas | Fair amount of "file not found" srm-atlas.gridpp.rl.ac.uk
101079 | Green | Urgent | In Progress | 2014-02-09 | 2014-02-17 |  | ARC CEs have VOViews with a default SE of "0"
101052 | Yellow | Urgent | In Progress | 2014-02-06 | 2014-02-14 | Biomed | Can't retrieve job result file from cream-ce02.gridpp.rl.ac.uk
101015 | Green | Less Urgent | In Progress | 2014-02-05 | 2014-02-06 | CMS | [sr #141890] Failed PhEDEx transfers between T3_US_Minnesota and T1_UK_RAL_Buffer
100114 | Red | Less Urgent | Waiting Reply | 2014-01-08 | 2014-02-11 |  | Jobs failing to get from RAL WMS to Imperial
99556 | Red | Very Urgent | In Progress | 2013-12-06 | 2014-02-13 |  | NGI Argus requests for NGI_UK
98249 | Red | Urgent | On Hold | 2013-10-21 | 2014-01-29 | SNO+ | please configure cvmfs stratum-0 for SNO+ at RAL T1
97025 | Red | Less Urgent | On Hold | 2013-09-03 | 2014-02-05 |  | Myproxy server certificate does not contain hostname
Day | OPS | Alice | Atlas | CMS | LHCb | Comment
12/02/14 | 100 | 100 | 100 | 100 | 95.8 | Single SRM test failure (SRM_FILE_BUSY)
13/02/14 | 100 | 100 | 100 | 100 | 95.8 | Single SRM test failure (could not open connection to srm-lhcb.gridpp.rl.ac.uk)
14/02/14 | 100 | 100 | 99.7 | 100 | 100 | Single SRM test failure on GET (SRM_FILE_BUSY)
15/02/14 | 100 | 100 | 100 | 100 | 100 |
16/02/14 | 100 | 100 | 100 | 100 | 100 |
17/02/14 | 100 | 100 | 100 | 95.7 | 100 | Single SRM Put test failure (User timeout over)
18/02/14 | 100 | 100 | 100 | 100 | 100 |