RAL Tier1 Operations Report for 11th February 2015
Review of Issues during the week 4th to 11th February 2015.
- During the afternoon of Monday (2nd Feb) there was a problem with some database services running on non-optimal nodes within the RAC. After these were moved to their correct locations, some Castor service restarts were carried out to ensure the connections to the database were correct.
- A single file was reported lost to CMS. This was an old (2012) file picked up during a consistency check (a toy sketch of such a check follows below).
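A minimal sketch of the kind of storage-versus-catalogue comparison such a consistency check performs is shown below. The input file names and their one-LFN-per-line format are assumptions for illustration only; this is not the actual CMS or RAL tooling.

    # Toy consistency check: compare a storage-system dump against the
    # experiment's file catalogue and report entries known to only one side.
    # "castor_dump.txt" and "cms_catalogue.txt" are hypothetical inputs,
    # each holding one logical file name (LFN) per line.

    def load_lfns(path):
        """Read one LFN per line, ignoring blank lines."""
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    storage = load_lfns("castor_dump.txt")      # what the storage system holds
    catalogue = load_lfns("cms_catalogue.txt")  # what the experiment expects

    dark_data = storage - catalogue   # on storage but unknown to the catalogue
    lost_files = catalogue - storage  # catalogued but missing from storage

    print(f"{len(dark_data)} dark files, {len(lost_files)} lost files")
    for lfn in sorted(lost_files):
        print("LOST:", lfn)

Files reported in the second set are candidates for being declared lost, as happened with the single CMS file above.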
Resolved Disk Server Issues
- GDSS778 (LHCbDst - D1T0) did not come back after being shut down for a reboot during scheduled work on Castor on Monday (2nd Feb). It was found to have a faulty disk drive. After being checked out it was returned to production during Tuesday afternoon (3rd Feb) in read-only mode, and was put back in full (read/write) production on the morning of Wednesday (4th Feb).
Current operational status and issues
- We are running with a single router connecting the Tier1 network to the site network, rather than a resilient pair.
- There was a problem overnight with some CMS Condor job submissions. This is not yet understood; a basic queue health probe is sketched below.
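As a starting point for investigating submission problems like this, a simple probe can ask the local HTCondor schedd for its queue totals; condor_q exiting non-zero or hanging is one signature of a broken submission path. This is an illustrative sketch only, not the diagnosis of the incident above.

    # Query the local HTCondor schedd for queue totals via the standard
    # condor_q CLI; a failure or timeout here points at the schedd itself.
    import subprocess

    try:
        result = subprocess.run(["condor_q", "-totals"],
                                capture_output=True, text=True, timeout=60)
    except subprocess.TimeoutExpired:
        print("condor_q hung for 60s: schedd likely unresponsive")
    else:
        if result.returncode != 0:
            print("condor_q failed:", result.stderr.strip())
        else:
            print(result.stdout.strip())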
Ongoing Disk Server Issues
None.
Notable Changes made this last week.
- A number of system reboots were carried out to pick up recent security patches (a post-reboot check is sketched after this list).
- Application of Oracle patches to some database nodes.
- The backup CERN OPN link has been connected via a new route. Traffic was routed over the link for 24 hours as a test.
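One way to confirm that rebooted nodes actually picked up a patched kernel is to compare the running kernel on each host against the expected version. The host names and kernel string below are hypothetical placeholders, not the Tier1's real inventory or target version; passwordless ssh to the nodes is assumed.

    # Compare each host's running kernel (uname -r over ssh) against the
    # expected patched version; mismatches still need a reboot.
    import subprocess

    EXPECTED_KERNEL = "2.6.32-504.8.1.el6.x86_64"            # hypothetical target
    HOSTS = ["node001.example.org", "node002.example.org"]   # hypothetical hosts

    for host in HOSTS:
        proc = subprocess.run(["ssh", host, "uname", "-r"],
                              capture_output=True, text=True, timeout=30)
        running = proc.stdout.strip()
        status = "OK" if running == EXPECTED_KERNEL else "NEEDS REBOOT"
        print(f"{host}: {running or proc.stderr.strip()} [{status}]")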
Declared in the GOC DB

Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
All Castor (All SRMs) | SCHEDULED | WARNING | 11/02/2015 08:30 | 11/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems.
Castor Atlas and GEN instances | SCHEDULED | WARNING | 04/02/2015 08:30 | 04/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Investigate problems on the primary Tier1 router. Discussions with the vendor are ongoing.
- Physical move of 2011 disk server racks to make space for new delivery.
Listing by category:
- Databases:
- Application of Oracle PSU patches to database systems (ongoing)
- A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
- Switch LFC/3D to new Database Infrastructure.
- Update to Oracle 11.2.0.4
- Castor:
- Update SRMs to new version (includes updating to SL6).
- Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Update Castor to 2.1-14-latest.
- Networking:
- Resolve problems with primary Tier1 Router
- Move the switches connecting the 2011 disk server batches onto the Tier1 mesh network.
- Make routing changes to allow the removal of the UKLight Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers. (Install patch to Router software).
- Fabric:
- Physical move of 2011 disk server racks to make space for new delivery.
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
Atlas and GEN Castor instances | SCHEDULED | WARNING | 04/02/2015 08:30 | 04/02/2015 15:00 | 6 hours and 30 minutes | Castor services At Risk during application of regular patches to back end database systems.
All Castor (all SRMs) | SCHEDULED | OUTAGE | 02/02/2015 10:00 | 02/02/2015 12:21 | 2 hours and 21 minutes | Castor Storage System Stop for updates and reboots.
srm-atlas.gridpp.rl.ac.uk | SCHEDULED | WARNING | 28/01/2015 09:00 | 28/01/2015 12:03 | 3 hours and 3 minutes | Warning while patching Castor disk servers.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
111347 | Green | Urgent | In Progress | 2015-01-22 | 2015-02-03 | CMS | T1_UK_RAL Consistency Check (January 2015)
111120 | Green | Less Urgent | Waiting Reply | 2015-01-12 | 2015-01-22 | Atlas | Large transfer errors from RAL-LCG2 to BNL-OSG2
109694 | Red | Urgent | On Hold | 2014-11-03 | 2015-01-20 | SNO+ | gfal-copy failing for files at RAL
108944 | Red | Urgent | In Progress | 2014-10-01 | 2015-01-28 | CMS | AAA access test failing at T1_UK_RAL
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2015-01-28 | Atlas | BDII vs SRM inconsistent storage capacity numbers
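For tickets like the SNO+ one (GGUS 109694), a usual first step is to reproduce the failing transfer and attach the exact error text to the ticket. The sketch below drives the standard gfal-copy client from Python; the SRM source URL is a placeholder, not the actual affected file.

    # Attempt to copy one affected file with gfal-copy (verbose) and
    # capture the error output for the ticket.
    import subprocess

    SRC = "srm://srm-example.gridpp.rl.ac.uk/path/to/affected/file"  # placeholder
    DST = "file:///tmp/gfal-test-file"

    proc = subprocess.run(["gfal-copy", "-v", SRC, DST],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        print("gfal-copy failed:")
        print(proc.stderr)
    else:
        print("Copy succeeded; failure not reproduced.")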
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment
04/02/15 | 100 | 100 | 99 | 33 | 100 | 100 | 99 | CMS: Problem affecting all CMS ARC CEs; Atlas: single SRM test failure: [SRM_FAILURE] Error trying to locate the file in the disk cache
05/02/15 | 100 | 100 | 100 | 75 | 100 | 100 | 99 | CMS: Problem affecting all CMS ARC CEs
06/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
07/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
08/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
09/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
10/02/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
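Broadly, a day's availability figure is the fraction of monitoring test intervals that passed: CMS at 33% on 04/02/15 corresponds to roughly 8 good hours out of 24. A toy calculation follows; the hourly pass/fail pattern is invented for illustration, not the real test history.

    # Availability = successful test intervals / total intervals, as a
    # percentage. Hypothetical hour-resolution results for one day:
    hourly_ok = [True] * 8 + [False] * 16   # 8 good hours of 24 (invented)

    availability = 100 * sum(hourly_ok) / len(hourly_ok)
    print(f"availability: {availability:.0f}%")   # -> 33%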