Revision as of 09:37, 7 June 2017
RAL Tier1 Operations Report for 7th June 2017
Review of Issues during the week 31st May to 7th June 2017.
- Following the upgrade of the Castor GEN instance last Wednesday (31st May) there was a problem with the 'OPS' tests against the instance, which was fixed the following day. No other problems were encountered after the upgrade.
- There was a problem with the CMS Castor instance on Friday (2nd June) caused by blocking sessions in the database.
- A problem of ARP poisoning on the Tier1 network is affecting some of the monitoring used by Echo. The cause is now believed to be understood and a fix (a setting change in the OPN Router) has been applied.
- There is a problem on the site firewall that is affecting some specific data flows. This was first investigated in connection with videoconferencing problems. It is expected to be affecting our data flows that pass through the firewall (such as traffic to/from worker nodes).
Resolved Disk Server Issues
- GDSS658 (AtlasScratchDisk - D1T0) crashed last Tuesday (30th May). It was returned to service on Thursday (1st June). Two disks were replaced after testing identified them as problematic.
- GDSS793 (AtlasDataDisk - D1T0) was taken out of service on Monday (5th June) as two disk drives in the server were faulty. Following replacement of the first drive and the subsequent rebuild, it was returned to service yesterday (6th June), initially in read-only mode.
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM, and these are affecting our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download" (an illustrative configuration sketch follows this list). The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to re-visit this issue now that Castor has been upgraded.
- LHCb Castor performance has been OK since the 2.1.16 update, although this has not been under high load. A load test (mimicking the stripping/merging campaign) was carried out on Wednesday 24th May. This was successful in that the specific performance/timeout problem seen before the Castor 2.1.16 upgrade did not recur. The main limitation encountered within Castor was load on the older disk servers in the instance. Following the 2.1.16 upgrade and this successful test, this item is being removed from the list of ongoing issues.
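For context, "lazy-download" is a client-side file-access mode in CMSSW jobs rather than a Castor setting, and is normally steered through the AdaptorConfig service in a job configuration. The fragment below is only a minimal sketch of how such a setting is typically expressed, assuming the standard AdaptorConfig parameters (cacheHint/readHint); it is not the actual change CMS applied.

    # Illustrative CMSSW configuration fragment (assumption: standard
    # AdaptorConfig parameters; not the exact change CMS made).
    import FWCore.ParameterSet.Config as cms

    process = cms.Process("EXAMPLE")

    # Steer remote reads away from "lazy-download" (whole-block pre-fetching
    # to local disk) and let the application read directly from storage.
    process.AdaptorConfig = cms.Service(
        "AdaptorConfig",
        cacheHint = cms.untracked.string("application-only"),
        readHint = cms.untracked.string("auto-detect"),
    )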
Ongoing Disk Server Issues
- None
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- The CEs are being migrated to use the load balancers in front of the Argus service.
- Parameter change in the OPN Router to stop it providing proxy ARP information.
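The ARP issue noted in the Review of Issues above shows up on affected hosts as many IP addresses resolving to a single MAC address, with the router answering on their behalf. Purely as an illustration, and not the monitoring actually used by Echo, a heuristic check for that symptom on a Linux host could look like the following sketch (reading /proc/net/arp):

    # Illustrative sketch only (not the Echo monitoring): flag MAC addresses
    # that appear against more than one IP in the local ARP cache - the usual
    # symptom when a router with proxy ARP enabled answers for other hosts.
    from collections import defaultdict

    def macs_with_multiple_ips(arp_table="/proc/net/arp"):
        ips_by_mac = defaultdict(list)
        with open(arp_table) as fh:
            next(fh)  # skip the header line
            for line in fh:
                fields = line.split()
                if len(fields) >= 4 and fields[3] != "00:00:00:00:00:00":
                    ips_by_mac[fields[3]].append(fields[0])
        return {mac: ips for mac, ips in ips_by_mac.items() if len(ips) > 1}

    if __name__ == "__main__":
        for mac, ips in macs_with_multiple_ips().items():
            print("%s answers for %d addresses: %s" % (mac, len(ips), ", ".join(ips)))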
Declared in the GOC DB
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
Listing by category:
- Castor:
  - Move to generic Castor headnodes.
  - Merge AtlasScratchDisk into the larger Atlas disk pool.
- Networking:
  - Increase the OPN link to CERN from 2*10Gbit to 3*10Gbit links.
  - Enable the first services on the production network over IPv6 now that the addressing scheme has been agreed. (Perfsonar is already working over IPv6.)
- Services:
  - Put the Argus systems behind load balancers to improve resilience.
  - The production FTS needs updating. The new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
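On the FTS item above: the upgraded FTS exposes a REST interface in place of SOAP, and transfers are typically submitted through the fts3-rest Python bindings. The following is a minimal sketch assuming those bindings are installed and a valid grid proxy is available; the endpoint and file URLs are placeholders, not RAL's actual values.

    # Minimal sketch of a transfer submission via the FTS3 REST interface
    # (assumption: fts3-rest Python bindings; endpoint and URLs are placeholders).
    import fts3.rest.client.easy as fts3

    endpoint = "https://fts.example.org:8446"      # placeholder FTS endpoint
    context = fts3.Context(endpoint)               # picks up the grid proxy by default

    transfer = fts3.new_transfer(
        "gsiftp://source.example.org/path/to/file",
        "gsiftp://destination.example.org/path/to/file",
    )
    job = fts3.new_job([transfer], verify_checksum=True, retry=3)

    job_id = fts3.submit(context, job)
    print("Submitted job %s" % job_id)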
Entries in GOC DB starting since the last report.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
Castor GEN instance (srm-alice, srm-biomed, srm-dteam, srm-ilc, srm-mice, srm-na62, srm-pheno, srm-snoplus, srm-t2k) | SCHEDULED | OUTAGE | 31/05/2017 10:00 | 31/05/2017 12:56 | 5 hours | Upgrade of Castor GEN instance to version 2.1.16. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
127967 | Green | Less Urgent | In Progress | 2017-04-27 | 2017-06-06 | MICE | Enabling pilot role for mice VO at RAL-LCG2 |
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-05-19 | LHCb | CEs at RAL not responding |
127597 | Amber | Urgent | In Progress | 2017-04-07 | 2017-05-16 | CMS | Check networking and xrootd RAL-CERN performance |
127240 | Red | Urgent | In Progress | 2017-03-21 | 2017-05-15 | CMS | Staging Test at UK_RAL for Run2 |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2. |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|---|---|
31/05/17 | 100 | 100 | 100 | 89 | 100 | 100 | 100 | 100 | 93 | CMS: Some ‘User timeout over’ on the SRM tests. There were a few CE test failures too, with ‘Job held’ and ‘unable to open remote xrootd ..’ errors. |
01/06/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | N/A | CMS: Some SRM failures with an error of ‘ERROR: DESTINATION SRM_PUT_TURL error on the turl request : [SE][StatusOfPutRequest][SRM_FAILURE] Unable to issue PrepareToPut request to Castor’. |
02/06/17 | 100 | 100 | 100 | 92 | 100 | 100 | 100 | 100 | 100 | CMS: Some instances of “Unable to issue PrepareToPut request to Castor” |
03/06/17 | 100 | 100 | 100 | 90 | 100 | 100 | 100 | 98 | N/A | CMS: Some user timeout errors on SRM tests. |
04/06/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | N/A | CMS: Some user timeout errors on SRM tests. |
05/06/17 | 100 | 100 | 100 | 94 | 100 | 100 | 100 | 100 | N/A | CMS: SRM test failures. “User timeout” – some on Put, some on Get. |
06/06/17 | 100 | 100 | 100 | 92 | 100 | 100 | 100 | 100 | 99 | CMS: SRM test failures. “User timeout” – some on Put, some on Get. |
Notes from Meeting.
- None yet