Revision as of 12:17, 15 March 2017
RAL Tier1 Operations Report for 15th March 2017
Review of Issues during the week 8th to 15th March 2017.
- There was a very large FTS transfer queue for Atlas during the middle of last week. This was resolved by Atlas.
- There was a problem with the Microsoft Hyper-V 2012 hypervisor cluster on Thursday afternoon, 9th March. Two of the five nodes appear to have updated a particular component and attempted to move VMs to other nodes. It took a few hours for the system to recover. This affected a number of services including BDIIs, FTS nodes and CEs. However, the resilience built into these services meant that the operational effect was small.
- We have an ongoing problem with the SRM SAM tests for Atlas, which are failing much of the time. We have confirmed this is not affecting Atlas operationally; it is only the tests that fail. We still have a GGUS ticket open with Atlas as the test itself appears to be problematic.
- There was a problem with the squid systems on Sunday (12th March): under high load they hit a resource limitation. This is being resolved by increasing one of the OS parameters.
- Yesterday (Tuesday 14th) there were problems with Castor during a reconfiguration of the back-end database nodes. A GGUS ticket was received from LHCb; other VOs were also affected.
Resolved Disk Server Issues
- GDSS689 (AtlasDataDisk - D1T0) reported 'fsprobe' errors and was taken out of production last Wednesday (8th March). It was returned to service on Friday (10th) having had two disks replaced.
- GDSS623 (GenTape - D0T1) had one partition go read-only on Friday evening, 10th March. It was put back in service read-only on Sunday (12th) so that the files awaiting migration to tape could be drained off.
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM. These are affecting our (CMS) availabilities, but the level of failures is reduced compared with a few weeks ago.
Ongoing Disk Server Issues
- None
Limits on concurrent batch system jobs.
- Atlas Pilot (Analysis) 1500
- CMS Multicore 460
Notable Changes made since the last meeting.
- Increased max_filedesc parameter on squids to enable them to better cope with high load.
- ECHO: Two additional 'MON' boxes are being set up, bringing the total to five. The existing three can cope with normal activity, but the additional ones will speed up recoveries and starts. Two additional gateway nodes are also being set up (also bringing the total to five), which will improve access bandwidth.
- The first of two chillers for the R89 machine room air-conditioning has been replaced.
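The squid change noted above (raising max_filedesc to cope with high load) normally involves both the OS-level open-file limit and Squid's own setting. A minimal sketch of such a change; the values and paths here are illustrative assumptions for an SL6-era host, not the actual configuration used at RAL:

```shell
# Raise the OS open-file limit for the squid user
# (illustrative limit of 16384; applied via pam_limits)
cat >> /etc/security/limits.conf <<'EOF'
squid  soft  nofile  16384
squid  hard  nofile  16384
EOF

# Tell Squid to use the larger limit; max_filedesc must not
# exceed the OS hard limit set above.
echo "max_filedesc 16384" >> /etc/squid/squid.conf

# Restart squid so the new limits take effect (SL6 init style)
service squid restart
```

Depending on the Squid build, the limit may also need to be compiled in or set via ulimit in the init script; the directive name max_filedesc is the one used by Squid's own configuration.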
Declared in the GOC DB
None
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Update Castor SRMs. Propose LHCb SRMs first - target date 22nd March.
- Chiller replacement - work ongoing.
- Merge AtlasScratchDisk into larger Atlas disk pool.
Listing by category:
- Castor:
- Update SRMs to new version, including updating to SL6.
- Bring some newer disk servers ('14 generation) into service, replacing some older ('12 generation) servers.
- Databases
- Removal of "asmlib" layer on Oracle database nodes. (Ongoing)
- Networking
- Enable first services on production network with IPv6 once addressing scheme agreed.
- Infrastructure:
- Two of the chillers supplying the air-conditioning for the R89 machine room will be replaced.
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Whole site | UNSCHEDULED | OUTAGE | 09/03/2017 12:10 | 09/03/2017 13:52 | 1 hour and 42 minutes | Problems with virtualisation infrastructure, some services degraded or unavailable |
Whole site | SCHEDULED | WARNING | 08/03/2017 07:00 | 08/03/2017 11:00 | 4 hours | Warning on site during network intervention in preparation for IPv6. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
126905 | Green | Less Urgent | In Progress | 2017-03-02 | 2017-03-02 | solid | finish commissioning cvmfs server for solidexperiment.org |
126184 | Yellow | Less Urgent | In Progress | 2017-01-26 | 2017-02-07 | Atlas | Request of inputs for new sites monitoring |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-03-02 |  | CASTOR at RAL not publishing GLUE 2.
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC ECHO = Atlas ECHO (Template 842); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | Atlas HC ECHO | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|---|
08/03/17 | 100 | 100 | 60 | 96 | 100 | 75 | 196 | 100 | Atlas: Ongoing problems with SRM test (Atlas Castor restarted to try and fix this - but no effect); CMS - timeouts in SRM tests. |
09/03/17 | 100 | 100 | 35 | 96 | 100 | 88 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests. |
10/03/17 | 100 | 100 | 94 | 95 | 100 | 96 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests. |
11/03/17 | 100 | 100 | 92 | 199 | 100 | 99 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests. |
12/03/17 | 100 | 100 | 96 | 196 | 96 | 97 | 94 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests. |
13/03/17 | 100 | 100 | 96 | 197 | 100 | 99 | 100 | 100 | Atlas: Ongoing problems with SRM test; CMS - timeouts in SRM tests. |
14/03/17 | 100 | 100 | 92 | 84 | 100 | 92 | 100 | 93 | Atlas & CMS - problems during work on back-end Castor databases. |
Notes from Meeting.
- None yet.