Revision as of 10:06, 14 June 2017
RAL Tier1 Operations Report for 14th June 2017
Review of Issues during the week 7th to 14th June 2017.
- There were problems with the SRMs for the Castor GEN instance over the last weekend (10/11 June), with one of the processes failing. The problem is not yet understood.
- There have been problems with the Echo gateways over the last day. These are being worked on as this report is being prepared.
- Last week we reported a high rate of disk problems on one batch of disk servers (the OCF '14 batch). In some cases the vendor finds no fault in the drives that have been removed. We plan to update the RAID card firmware in these systems once the latest version has been tested.
Resolved Disk Server Issues
- GDSS731 (LHCbDst - D1T0) failed late Saturday night (10th June). It was returned to service Monday afternoon (12th June). A faulty disk had been replaced and the RAID array rebuild was OK.
Current operational status and issues
- We are still seeing a rate of failures of the CMS SAM tests against the SRM. These are affecting our CMS availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to re-visit this issue now that Castor has been upgraded.
- There is a problem on the site firewall that is affecting some specific data flows. It was found while investigating videoconferencing problems. It is expected to be affecting data that flows through the firewall (such as traffic to/from worker nodes).
Ongoing Disk Server Issues
- None
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- All CEs have now been migrated to use the load balancers in front of the Argus service.
- A start has been made on enabling XRootD gateways on worker nodes for Echo access. This will be ramped up to one batch of worker nodes.
Declared in the GOC DB
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links (delayed until 28th June).
- Firmware updates in OCF 14 disk servers.
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface.
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Merge AtlasScratchDisk into larger Atlas disk pool.
- Networking
- Increase OPN link to CERN from 2*10Gbit to 3*10Gbit links.
- Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
- Services
- The production FTS needs updating. This will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
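As background to the SOAP retirement: FTS3's replacement submission path is its REST interface, which takes a JSON job description rather than a SOAP envelope. The sketch below shows the general shape of such a payload; the source/destination URLs and parameter values are illustrative assumptions, not values used at RAL.

```python
import json

# Minimal sketch of an FTS3 REST-style transfer-job payload.
# All endpoints and parameter values here are hypothetical examples.
job = {
    "files": [
        {
            # Hypothetical source and destination storage URLs.
            "sources": ["gsiftp://source.example.org/path/to/file"],
            "destinations": ["gsiftp://dest.example.org/path/to/file"],
            "checksum": "ADLER32",
        }
    ],
    "params": {
        "retry": 2,         # retry a failed transfer twice
        "overwrite": True,  # replace the destination file if it exists
    },
}

# JSON body that would be POSTed to the FTS3 server's job-submission endpoint.
payload = json.dumps(job)
print(payload)
```

In practice a client (such as the fts-rest tooling) handles authentication and submission; the point here is only that job descriptions become plain JSON once SOAP is dropped.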
Entries in GOC DB starting since the last report.
- None
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
128954 | Green | Less Urgent | In Progress | 2017-06-14 | 2017-06-14 | SNO+ | Tape storage failure |
128830 | Green | Less Urgent | Waiting For Reply | 2017-06-07 | 2017-06-07 | Pheno | Jobs failing at RAL due errors with gfal2 |
127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-05-19 | LHCb | CEs at RAL not responding |
127597 | Red | Urgent | In Progress | 2017-04-07 | 2017-05-16 | CMS | Check networking and xrootd RAL-CERN performance |
127240 | Red | Urgent | In Progress | 2017-03-21 | 2017-05-15 | CMS | Staging Test at UK_RAL for Run2 |
124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2. |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|---|---|
07/06/17 | 100 | 100 | 100 | 92 | 100 | 100 | 99 | 100 | 100 | SRM test failures. (User timeouts). |
08/06/17 | 100 | 100 | 98 | 94 | 100 | 100 | 96 | 93 | 99 | Atlas: SRM test failure with “Host not known”; CMS: 94% (SRM test failures with “user timeout”) |
09/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 94 | 100 | 97 | SRM test failures. (User timeouts). |
10/06/17 | 100 | 100 | 100 | 96 | 100 | 100 | 100 | 100 | 100 | SRM test failures. (User timeouts). |
11/06/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
12/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 100 | 100 | SRM test failures. (User timeouts). |
13/06/17 | 100 | 100 | 100 | 96 | 100 | 100 | 99 | 92 | 100 | SRM test failures. (User timeouts). |
Notes from Meeting.
- There will most probably NOT be a meeting in the next two weeks (clashes with HEP Sysman and the WLCG Workshop). However, a report will be produced and comments invited.
- Discussion around the date for upgrading the 'production' FTS3 service, which will terminate the SOAP interface to FTS3. A possible date is 7th July 2017.
- MICE have stopped data taking for now; their next data-taking period is in September. They are ready for us to upgrade FTS3 and drop the SOAP interface.