Revision as of 09:45, 21 June 2017
RAL Tier1 Operations Report for 21st June 2017
Review of Issues during the week 14th to 21st June 2017.
- Problems with the Echo gateways were reported last week, triggered by increased load. In response, the XRootD gateway was stopped for a few days and throttling was applied to the GridFTP gateways. Performance improvements have been made in light of this, although some problems remain. The plan (currently being implemented) to add XRootD gateways to each of the worker nodes is expected to bring a significant improvement in this area.
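For context, throttling a GridFTP gateway of this kind is typically done through the Globus GridFTP server's connection limits. The fragment below is an illustrative sketch only: the option name comes from the standard gridftp.conf, but the value and file path are assumptions, not the settings actually deployed at RAL.

```shell
# /etc/gridftp.conf (illustrative; not the RAL production configuration)
# Cap the number of concurrent client connections the gateway accepts.
# Connections beyond the cap are held back, shedding load during bursts.
connections_max 100
```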
Resolved Disk Server Issues
- GDSS732 (LHCbDst - D1T0) failed early afternoon on Wednesday 14th June. It was returned to service the following day. One disk drive was replaced and one file was declared lost as a result of the failure.
- GDSS819 (LHCbDst - D1T0) failed in the early hours of Saturday morning (17th June). It was returned to service, initially read-only, later that day.
Current operational status and issues
- We are still seeing failures of the CMS SAM tests against the SRM; these affect our CMS availability figures. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on 25th May, although it is still not 100%. It is still planned to revisit this issue now that Castor has been upgraded.
- There is a fault on the site firewall that is disrupting some specific data flows. It was investigated in connection with videoconferencing problems, and it is expected to affect our data flows that pass through the firewall (such as traffic to/from worker nodes).
Ongoing Disk Server Issues
- None
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- Enabled XRootD gateways on more worker nodes for Echo access.
- An internal (transparent) change was made to the Echo CEPH CMS pool to increase the number of placement groups. This is required before more hardware can be added and the total capacity increased.
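For context, the placement-group count for a Ceph pool is usually sized with the rule of thumb of roughly 100 PGs per OSD, divided by the pool's replica (or erasure-code) width and rounded up to a power of two. A minimal sketch of that calculation, with purely illustrative numbers (the real RAL OSD counts and pool widths are not stated in this report):

```python
def target_pg_count(num_osds: int, pgs_per_osd: int = 100, pool_size: int = 3) -> int:
    """Rule-of-thumb Ceph PG count: (OSDs * target PGs per OSD) / pool size,
    rounded up to the next power of two, as Ceph prefers power-of-two counts."""
    raw = num_osds * pgs_per_osd / pool_size
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# Illustrative only: 500 OSDs with 3x replication -> 500*100/3 ~ 16667 -> 32768
print(target_pg_count(500))  # → 32768
```

The increase itself would then be applied with something like `ceph osd pool set <pool> pg_num <count>`, which is why it has to happen before the extra hardware is brought into the pool.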
Declared in the GOC DB
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links (delayed until 28th June).
- Firmware updates in OCF 14 disk servers.
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface.
- Increase the number of placement groups in the Atlas CEPH pool.
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Merge AtlasScratchDisk into larger Atlas disk pool.
- Echo:
- Increase the number of placement groups in the Atlas CEPH pool.
- Networking
- Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
- Enable first services on the production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar is already working over IPv6.)
- Services
- The production FTS needs updating; the new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
Entries in GOC DB starting since the last report.
- None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
129072 | Green | Less Urgent | In Progress | 2017-06-20 | 2017-06-20 | | Please remove vo.londongrid.ac.uk from RAL-LCG2 resources
129059 | Green | Very Urgent | In Progress | 2017-06-20 | 2017-06-20 | LHCb | Timeouts on RAL Storage |
128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-06-16 | Solid | solidexperiment.org CASTOR tape support |
128954 | Green | Less Urgent | In Progress | 2017-06-14 | 2017-06-20 | SNO+ | Tape storage failure |
128830 | Yellow | Less Urgent | Waiting For Reply | 2017-06-07 | 2017-06-07 | Pheno | Jobs failing at RAL due errors with gfal2 |
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-05-19 | LHCb | CEs at RAL not responding |
127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2.
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|---|---|
14/06/17 | 100 | 100 | 100 | 99 | 96 | 100 | 100 | 81 | 100 | SRM test failures. CMS: User timeouts. LHCb: There is no data for the SRM tests, the other tests seem fine. |
15/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 98 | 100 | 100 | SRM test failures. (User timeouts). |
16/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 100 | N/A | SRM test failures. (User timeouts). |
17/06/17 | 100 | 100 | 98 | 99 | 100 | 100 | 100 | 100 | 100 | SRM test failures. (User timeouts). |
18/06/17 | 100 | 100 | 100 | 94 | 95 | 100 | 100 | 100 | 100 | CMS: Could not open connection to srm-cms.gridpp.rl.ac.uk; LHCb: Could not open connection to srm-lhcb.gridpp.rl.ac.uk – Both CMS and LHCb test failures occurred at similar times (17:00 CET). |
19/06/17 | 100 | 100 | 100 | 98 | 100 | 100 | 100 | 98 | 100 | SRM test failures. (User timeouts). |
20/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 100 | 98 | Single SRM test failure. (User timeout). |
Notes from Meeting.
- None yet. Note that this is a "virtual" meeting: the report is produced and comments are invited.