Tier1 Operations Report 2017-07-05
From GridPP Wiki
Revision as of 09:43, 5 July 2017
RAL Tier1 Operations Report for 5th July 2017
Review of Issues during the week 28th June to 5th July 2017.
- There were severe problems with the Atlas SRMs at the end of last week. On Thursday afternoon one of the SRM back-end daemon processes started crashing on each of the Atlas SRMs, and a greatly increased number of SRM requests was also seen. Work continued through the remainder of Thursday and Friday but failed to resolve the problem. On Sunday a correction was applied to the Atlas SRMs to filter out double slashes ("//") in incoming requests, re-instating a fix that had been applied to the old SRMs back in 2014. Since then the Atlas SRMs have worked OK. Work is going on to confirm this really is the solution before applying the fix to the SRMs for the other Castor instances. The high SRM request rate seen is possibly (probably) the response of the Atlas software as it tried to query the status of files and transfers during the problem. Atlas Castor was declared down in the GOC DB from Friday afternoon until Sunday morning, when the fix was applied.
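The double-slash filter applied to the Atlas SRMs can be illustrated with a short sketch. This is our own illustration of the kind of normalisation involved, not the actual SRM code; the function name `normalise_srm_path` is hypothetical, and it applies to the path component only (not to scheme prefixes such as `srm://`).

```python
def normalise_srm_path(path):
    """Collapse repeated slashes in the path component of an SRM request.

    Hypothetical illustration of the filtering re-instated on the
    Atlas SRMs; the real fix lives inside the SRM daemon itself.
    Loop until no "//" remains, so runs of three or more slashes
    also collapse to a single "/".
    """
    while "//" in path:
        path = path.replace("//", "/")
    return path

# Example with a made-up Castor-style path:
print(normalise_srm_path("/castor/ads.rl.ac.uk//prod//atlas/file.root"))
# -> /castor/ads.rl.ac.uk/prod/atlas/file.root
```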
Resolved Disk Server Issues
- None
Current operational status and issues
- We are still seeing a rate of failures of the CMS SAM tests against the SRM. These are affecting our CMS availabilities. CMS are also looking at file access performance and have turned off "lazy-download". The CMS SRM SAM test success rate has improved since the Castor 2.1.16 upgrade on the 25th May, although it is still not 100%. It is still planned to re-visit this issue now that Castor has been upgraded.
- There is a problem on the site firewall which is affecting some specific data flows. This was first investigated in connection with videoconferencing problems. It is expected to be having an effect on our data that flows through the firewall (such as to/from the worker nodes).
Ongoing Disk Server Issues
- None
Limits on concurrent batch system jobs.
- CMS Multicore 550
Notable Changes made since the last meeting.
- On Tuesday 27th June the paired link to R26 was switched from 2*10Gb/s to 2*40Gb/s.
- On the morning of Wednesday 28th June the OPN link to CERN was increased from 2*10Gb/s to 3*10Gb/s.
Declared in the GOC DB
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Pending - but not yet formally announced:
- Firmware updates in OCF 14 disk servers.
- Upgrade the FTS3 service to a version that will no longer support the SOAP interface.
- Increase the number of placement groups in the Atlas Echo CEPH pool.
Listing by category:
- Castor:
- Move to generic Castor headnodes.
- Merge AtlasScratchDisk into larger Atlas disk pool.
- Echo:
- Increase the number of placement groups in the Atlas Echo CEPH pool.
- Networking
- Enable first services on production network with IPv6 now that the addressing scheme has been agreed. (Perfsonar already working over IPv6).
- Services
- The production FTS needs updating. The new version will no longer support the SOAP interface. (The "test" FTS, used by Atlas, has already been upgraded.)
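The placement-group increase planned for the Atlas Echo CEPH pool follows the usual Ceph sizing guideline: target roughly 100 placement groups per OSD, divide by the pool's replica (or erasure-coding) width, and round up to a power of two. A minimal sketch of that arithmetic, using made-up cluster numbers rather than the actual Echo parameters:

```python
def suggest_pg_num(num_osds, data_width, target_pgs_per_osd=100):
    """Suggest a pg_num for a Ceph pool using the common sizing
    guideline: (OSDs * target PGs per OSD) / data width, rounded
    up to the next power of two.

    The numbers fed in below are illustrative, not the real Echo
    cluster size; the actual change would be applied with the
    Ceph CLI by the admins.
    """
    raw = num_osds * target_pgs_per_osd / data_width
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# Example with hypothetical figures: 800 OSDs, width 3.
print(suggest_pg_num(num_osds=800, data_width=3))  # -> 32768
```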
Entries in GOC DB starting since the last report.
- None
Open GGUS Tickets (Snapshot during morning of meeting)
| GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
|---|---|---|---|---|---|---|---|
| 129098 | Green | Urgent | In Progress | 2017-06-22 | 2017-06-27 | Atlas | RAL-LCG2: source / destination file transfer errors ("Connection timed out") |
| 129072 | Green | Less Urgent | In Progress | 2017-06-20 | 2017-06-20 | | Please remove vo.londongrid.ac.uk from RAL-LCG2 resources |
| 129059 | Green | Very Urgent | In Progress | 2017-06-20 | 2017-06-27 | LHCb | Timeouts on RAL Storage |
| 128991 | Green | Less Urgent | In Progress | 2017-06-16 | 2017-06-16 | Solid | solidexperiment.org CASTOR tape support |
| 127612 | Red | Alarm | In Progress | 2017-04-08 | 2017-06-27 | LHCb | CEs at RAL not responding |
| 127597 | Red | Urgent | On Hold | 2017-04-07 | 2017-06-14 | CMS | Check networking and xrootd RAL-CERN performance |
| 124876 | Red | Less Urgent | On Hold | 2016-11-07 | 2017-01-01 | OPS | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk |
| 117683 | Red | Less Urgent | On Hold | 2015-11-18 | 2017-05-10 | | CASTOR at RAL not publishing GLUE 2. |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 845); Atlas HC Echo = Atlas Echo (Template 841); CMS HC = CMS HammerCloud. All figures are percentages.
| Day | OPS | Alice | Atlas | CMS | LHCb | Atlas Echo | Atlas HC | Atlas HC Echo | CMS HC | Comment |
|---|---|---|---|---|---|---|---|---|---|---|
| 28/06/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 100 | 100 | Single SRM test failure with User Timeout. |
| 29/06/17 | 100 | 100 | 98 | 100 | 100 | 100 | 99 | 100 | 100 | Single SRM test failure '[SRM_FAILURE] Unable to receive header'. |
| 30/06/17 | 100 | 100 | 100 | 100 | 100 | 100 | 96 | 100 | 100 | |
| 01/07/17 | 100 | 100 | 100 | 99 | 100 | 100 | 100 | 100 | 100 | Single SRM test failure with User Timeout. |
| 02/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
| 03/07/17 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 98 | 100 | |
| 04/07/17 | 100 | 100 | 100 | 97 | 100 | 100 | 100 | 100 | 92 | Two SRM test failures and a brief set of failures of Job submission tests. |
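The daily percentages above reflect the fraction of monitoring test runs that succeeded on each day. A minimal sketch of that calculation, assuming simple rounding to the nearest whole percent (our own illustration, not the exact dashboard algorithm):

```python
def availability(passed, total):
    """Percentage of successful test runs in a day, rounded to the
    nearest whole percent. Illustrative only: the real availability
    figures come from the experiment dashboards, which weight and
    aggregate tests in their own way.
    """
    return round(100 * passed / total)

# A single failure out of 96 runs still rounds to 99%:
print(availability(passed=95, total=96))  # -> 99
```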
Notes from Meeting.
- None yet