RAL Tier1 Operations Report for 7th October 2015
Review of Issues during the week 30th September to 7th October 2015.
- There was a problem over the weekend running batch jobs that make use of glexec. This was traced to a missing component in the new worker node configuration and caused a very large loss of availability for CMS. Why the problem did not become apparent until the final batch of worker nodes was upgraded is not yet understood. (A minimal check for such a missing component is sketched after this list.)
- We have been working to understand the remaining low level of packet loss seen within a part of our Tier1 network. Some housekeeping tasks have been carried out as part of this.
- LHCb have reported an ongoing low but persistent rate of failure when copying the results of batch jobs to other sites. They have also reported a current problem that sometimes occurs when writing these files to our Castor storage.
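As a rough illustration of the kind of check that can catch a missing glexec component on a worker node, the sketch below simply verifies that the glexec binary and its configuration files are present and that the binary is setuid. The file paths are typical defaults and are assumptions for illustration only; they are not taken from the actual RAL worker node configuration.

```python
#!/usr/bin/env python
# Minimal sanity check for a glexec installation on a worker node.
# NOTE: the paths below are typical defaults and are assumptions for
# illustration; they are not the RAL worker node configuration.
import os
import stat
import sys

CHECKS = [
    "/usr/sbin/glexec",              # glexec binary (assumed location)
    "/etc/glexec.conf",              # glexec configuration (assumed location)
    "/etc/lcmaps/lcmaps-glexec.db",  # LCMAPS policy used by glexec (assumed)
]

def main():
    missing = [path for path in CHECKS if not os.path.exists(path)]
    for path in missing:
        print("MISSING: %s" % path)

    binary = CHECKS[0]
    if os.path.exists(binary):
        mode = os.stat(binary).st_mode
        if not mode & stat.S_ISUID:
            # glexec normally needs to be setuid to switch identity.
            print("WARNING: %s is not setuid" % binary)

    sys.exit(1 if missing else 0)

if __name__ == "__main__":
    main()
```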
Resolved Disk Server Issues
- GDSS636 and GDSS637 (AtlasTape - D0T1) were both taken out of service for a time on Saturday (3rd Oct). Tests were failing against these machines but subsequent analysis showed they were just suffering extremely high load.
Current operational status and issues
- The intermittent, low-level, load-related packet loss seen over external connections is still being tracked.
- Long-standing CMS issues. The two items that remain are CMS Xroot (AAA) redirection and file open times. Work on the Xroot redirection is ongoing, with a new server having been added in recent weeks. File open times using Xroot remain slow, but this is a less significant problem. (A simple latency probe is sketched after this list.)
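As a rough illustration of how the slow Xroot file-open times could be probed from outside, the sketch below times an `xrdfs stat` call against a redirector. The redirector hostname and test file path are placeholders rather than the actual RAL or CMS endpoints, and the sketch assumes the XRootD client tools (`xrdfs`) are installed on the machine running it.

```python
#!/usr/bin/env python
# Rough probe of XRootD stat/file-open latency using the xrdfs client.
# The hostname and file path below are placeholders; substitute a real
# redirector and a file known to exist before using this.
import subprocess
import time

REDIRECTOR = "xrootd.example.org"             # placeholder redirector host
TEST_PATH = "/store/test/example_file.root"   # placeholder file path

def probe(redirector, path):
    start = time.time()
    try:
        rc = subprocess.call(["xrdfs", redirector, "stat", path])
    except OSError:
        print("xrdfs client not found on this machine")
        return
    elapsed = time.time() - start
    status = "ok" if rc == 0 else "failed (rc=%d)" % rc
    print("stat %s via %s: %s in %.2fs" % (path, redirector, status, elapsed))

if __name__ == "__main__":
    probe(REDIRECTOR, TEST_PATH)
```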
Ongoing Disk Server Issues
- GDSS657 (LHCb_RAW) failed on Saturday (3rd October). After being checked it was returned to service the following day in read-only mode. However, investigations to fix a problem on this server (failed battery on the RAID card) during yesterday's Castor outage uncovered a further problem and the server has been kept out of service. There are no files on the server awaiting migration - so it has no impact on access to LHCb files.
Notable Changes made since the last meeting.
- The Castor Oracle database "pluto" was successfully upgraded to Oracle 11.2.0.3 yesterday (6th Oct).
- Some internal housekeeping on the Tier1 network has taken place. This mainly involved adding second network links where these had previously been configured.
- The final batch of worker nodes has been updated to the new configuration, which uses CVMFS to pick up grid middleware. (A minimal check of the CVMFS repository on a node is sketched after this list.)
- There was a successful UPS/Generator load test this morning (Wed. 7th Oct).
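As a minimal sketch of verifying that a worker node can pick up middleware over CVMFS, the snippet below checks that a middleware repository is mounted and lists its top level. The repository name `grid.cern.ch` is used purely as an example and is an assumption, not a statement of which repository the RAL worker nodes actually use.

```python
#!/usr/bin/env python
# Check that a CVMFS repository is mounted and readable on a worker node.
# The repository name is an example/assumption, not necessarily the one
# used in the RAL worker node configuration.
import os
import sys

REPO = "/cvmfs/grid.cern.ch"  # assumed middleware repository

def main():
    if not os.path.isdir(REPO):
        # Accessing the path normally triggers the autofs mount; if it is
        # still not a directory the repository is unavailable.
        print("CVMFS repository %s is not available" % REPO)
        sys.exit(1)
    entries = sorted(os.listdir(REPO))
    print("%s mounted, %d top-level entries:" % (REPO, len(entries)))
    for name in entries[:10]:
        print("  " + name)
    sys.exit(0)

if __name__ == "__main__":
    main()
```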
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
All Castor (All SRMs) | SCHEDULED | OUTAGE | 13/10/2015 08:00 | 13/10/2015 16:00 | 8 hours | Outage of All Castor instances during upgrade of Oracle back end database. |
All Castor (All SRMs) | SCHEDULED | WARNING | 08/10/2015 08:30 | 08/10/2015 20:30 | 12 hours | Warning (At Risk) on All Castor instances during upgrade of back end Oracle database. |
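The downtimes listed above can also be retrieved programmatically from the GOC DB programmatic interface. The sketch below is a minimal example of such a query; the method and parameter names (`get_downtime`, `topentity`, `ongoing_only`) and the XML element names are written from memory and should be checked against the GOC DB PI documentation before use.

```python
#!/usr/bin/env python
# Minimal query of the GOC DB programmatic interface for downtimes
# declared against RAL-LCG2.  Method/parameter and XML element names are
# assumptions based on the public GOC DB PI and should be verified.
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2
import xml.etree.ElementTree as ET

URL = ("https://goc.egi.eu/gocdbpi/public/"
       "?method=get_downtime&topentity=RAL-LCG2&ongoing_only=no")

def main():
    xml_text = urlopen(URL).read()
    root = ET.fromstring(xml_text)
    for downtime in root.findall(".//DOWNTIME"):
        severity = downtime.findtext("SEVERITY", "?")
        description = downtime.findtext("DESCRIPTION", "?")
        start = downtime.findtext("FORMATED_START_DATE", "?")
        end = downtime.findtext("FORMATED_END_DATE", "?")
        print("%s  %s -> %s  %s" % (severity, start, end, description))

if __name__ == "__main__":
    main()
```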
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- Upgrade of remaining Castor disk servers (those in tape-backed service classes) to SL6. This will be transparent to users.
- Complete the upgrade of the Oracle databases behind Castor to version 11.2.0.4. The first steps of this multi-step upgrade have been carried out. Further dates are declared in the GOC DB, with an 'At Risk' on Castor tomorrow and then a shorter outage of Castor next Tuesday (13th October).
- Some detailed internal network re-configurations to enable the removal of the old 'core' switch from our network. This includes changing the way the UKLight router connects into the Tier1 network.
Listing by category:
- Databases:
  - Switch LFC/3D to new Database Infrastructure.
  - Update to Oracle 11.2.0.4. This will affect all services that use Oracle databases: Castor, Atlas Frontier (LFC done).
- Castor:
  - Update SRMs to new version (includes updating to SL6).
  - Update the Oracle databases behind Castor to version 11.2.0.4. Will require some downtimes (see above).
  - Update disk servers to SL6 (ongoing).
  - Update to Castor version 2.1.15.
- Networking:
  - Make routing changes to allow the removal of the UKLight router.
- Fabric:
  - Firmware updates on remaining EMC disk arrays (Castor, LFC).
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | SCHEDULED | WARNING | 07/10/2015 10:00 | 07/10/2015 11:30 | 1 hour and 30 minutes | Warning on site during quarterly UPS/Generator load test. |
All Castor (All SRMs) | SCHEDULED | OUTAGE | 06/10/2015 08:00 | 06/10/2015 16:00 | 8 hours | Outage of All Castor instances during upgrade of Oracle back end database. |
Whole site | SCHEDULED | WARNING | 30/09/2015 07:00 | 30/09/2015 09:00 | 2 hours | At Risk during upgrade of network connection from the Tier1 into RAL site core network. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
116618 | Green | Top Priority | On Hold | 2015-10-05 | 2015-10-06 | CMS | PhEDEx DBParam renewal |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
30/09/15 | 100 | 100 | 100 | 100 | 100 | 96 | 100 | |
01/10/15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
02/10/15 | 100 | 100 | 100 | 62 | 100 | 95 | 100 | Problem with glexec on new WN configuration when final batch upgraded. |
03/10/15 | 100 | 100 | 100 | 0 | 100 | 96 | 100 | Continuation of yesterday's problem. |
04/10/15 | 100 | 100 | 100 | 0 | 100 | 98 | 100 | Continuation of yesterday's problem. |
05/10/15 | 100 | 100 | 100 | 46 | 100 | 98 | 100 | Continuation of yesterday's problem. |
06/10/15 | 78.3 | 100 | 67 | 58 | 67 | 95 | 100 | Planned Castor outage for Oracle DB update. (Plus couple of SRM test failures for CMS). |
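As a small worked example, the daily figures in the table above can be averaged to give a per-VO availability for the week. The snippet below does this for the OPS, Alice, Atlas, CMS and LHCb columns using the numbers exactly as reported; it only illustrates the arithmetic and is not the official availability calculation.

```python
#!/usr/bin/env python
# Average the daily availability figures from the table above per VO.
# Numbers are copied from the report (30/09/15 to 06/10/15); this only
# illustrates the arithmetic, not the official availability calculation.

DAILY = {
    "OPS":   [100, 100, 100, 100, 100, 100, 78.3],
    "Alice": [100, 100, 100, 100, 100, 100, 100],
    "Atlas": [100, 100, 100, 100, 100, 100, 67],
    "CMS":   [100, 100, 62, 0, 0, 46, 58],
    "LHCb":  [100, 100, 100, 100, 100, 100, 67],
}

for vo, values in sorted(DAILY.items()):
    average = sum(values) / float(len(values))
    print("%-5s weekly average: %5.1f%%" % (vo, average))
```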