From GridPP Wiki
Latest revision as of 09:51, 27 March 2019
RAL Tier1 Operations Report for 26th March 2019
Review of Issues during the week 18th March 2019 to the 26th March 2019.
- Worker nodes on the batch farm have been kernel patched and rebooted. The Viglen 2011 tranche of machines will not be returned to production (most of them are broken).
- Tier-1 continues to be involved in the ongoing security challenge; we are now in the evaluation phase.
- SAM tests are frequently failing to report for some monitoring periods.
Current operational status and issues
Resolved Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
- | - | - | - | - |
Ongoing Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
gdss733 | LHCb | lhcbDst | d1t0 | - |
Limits on concurrent batch system jobs.
- ALICE - 1000
Notable Changes made since the last meeting.
- NTR
Entries in GOC DB starting since the last report.
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
- | - | - | - | - | - | - | - |
Declared in the GOC DB
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
- | - | - | - | - | - | - | - |
- No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope | Solution |
---|---|---|---|---|---|---|---|---|---|
140385 | cms | in progress | less urgent | 25/03/2019 | 26/03/2019 | CMS_Data Transfers | Data Transfer problems at T1_UK_RAL | WLCG | |
140220 | mice | in progress | less urgent | 15/03/2019 | 25/03/2019 | Other | mice LFC to DFC transition | EGI | |
139672 | other | in progress | urgent | 13/02/2019 | 18/03/2019 | Middleware | No LIGO pilots running at RAL | EGI | |
138665 | mice | on hold | urgent | 04/12/2018 | 30/01/2019 | Middleware | Problem accessing LFC at RAL | EGI |
GGUS Tickets Closed Last Week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope | Solution |
---|---|---|---|---|---|---|---|---|---|
140283 | atlas | solved | less urgent | 19/03/2019 | 20/03/2019 | File Transfer | UK RAL-LCG2: DESTINATION SRM_PUT_TURL error | WLCG | Corrected ownership of the CASTOR dir to atlas001:atlas |
140278 | cms | solved | urgent | 19/03/2019 | 22/03/2019 | CMS_Data Transfers | Transfers failing from FNAL_Buffer to RAL_Disk | WLCG | ECHO was mistakenly returning the correct size and checksum for files whose transfers had failed and which were corrupt, so PhEDEx marked them as good and they lay latent until they were accessed or transferred out. Transfers have recovered. |
140210 | atlas | solved | top priority | 14/03/2019 | 19/03/2019 | File Transfer | Cannot access some files | WLCG | These files were lost from RAL-LCG2-ECHO_DATADISK, probably due to an FTS bug which should now be fixed. They were identified here because they were needed by the production team. The ticket submitter has now marked them lost in Rucio. Independently, we have been analysing the results of the Rucio consistency checker. It identified 163 files lost from this disk; the 5 files listed in this ticket are among the 163. We will mark the remaining files lost in Rucio. |
140177 | cms | solved | urgent | 13/03/2019 | 18/03/2019 | CMS_Data Transfers | RAL FTS - Transfers failing from T1_US_FNAL_Disk to some sites | WLCG | IPv6 was registered but not fully enabled at FNAL. RAL had IPv6 fully enabled, so transfers were expected to use that connection; however, they were timing out and not falling back to IPv4. FNAL removed the IPv6 address, and since then the link has been green. |
140082 | atlas | closed | less urgent | 06/03/2019 | 21/03/2019 | Other | RAL-LCG2 squid service degraded | WLCG | There was an overnight problem with the underlying storage for this VM node; the squid01 node is now back serving requests. |
139990 | cms | solved | urgent | 01/03/2019 | 20/03/2019 | CMS_AAA WAN Access | T1_UK_RAL xrootd segfaulted | WLCG | The issue is intermittent and occasional. This segfault is most likely a known problem, which will be addressed in xrootd version 4.9, along with other fixes. RAL will install this when available. |
139983 | t2k.org | closed | less urgent | 28/02/2019 | 21/03/2019 | File Access | Failed to bring files online at RAL-tape | EGI | See above. |
139858 | cms | closed | urgent | 23/02/2019 | 18/03/2019 | CMS_Data Transfers | Failing transfers of 7 files from T1_UK_RAL_Disk to T2_FR_GRIF_LLR | WLCG | The files had missing chunks, so they were deleted and have now been retransferred. |
139639 | cms | closed | very urgent | 12/02/2019 | 22/03/2019 | CMS_AAA WAN Access | file open error at RAL | WLCG | Files retransferred. |
139476 | mice | closed | less urgent | 01/02/2019 | 22/03/2019 | Other | LFC dump for MICE VO | EGI | DB dump provided. |
139306 | dteam | closed | less urgent | 24/01/2019 | 20/03/2019 | Monitoring | perfsonar hosts need updating | EGI | Hosts updated and additional information added. Thanks! |
138500 | cms | closed | urgent | 26/11/2018 | 22/03/2019 | CMS_Data Transfers | Transfers failing from T2_PL_Swierk to RAL | WLCG | block size changed to default |
138033 | atlas | solved | urgent | 01/11/2018 | 26/03/2019 | Other | singularity jobs failing at RAL | EGI | As the new batch containers have been fully rolled out and appear to have been working since 18/03/19, I'm going to call this one solved. |
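Ticket 140278 above describes ECHO returning a matching size and checksum for corrupt files, which let PhEDEx mark them as good. As an illustration of the kind of independent replica verification that catches such files, here is a minimal sketch in Python (the helper names are hypothetical, not part of any RAL tooling; Adler-32 is the checksum commonly used by grid storage systems):

```python
import os
import zlib

def adler32_of(path, chunk_size=1 << 20):
    """Compute a file's Adler-32 checksum, reading in chunks to bound memory."""
    value = 1  # Adler-32 starts at 1
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            value = zlib.adler32(chunk, value)
    return format(value & 0xFFFFFFFF, "08x")

def verify_replica(path, expected_size, expected_adler32):
    """Accept a replica only if BOTH size and checksum match the catalogue.

    The size check is cheap and done first; the checksum catches
    corruption that preserves the file length.
    """
    if os.path.getsize(path) != expected_size:
        return False
    return adler32_of(path) == expected_adler32.lower()
```

The point of checking both values against an independently recorded catalogue entry, rather than asking the storage system itself, is that a storage-side bug of the kind described in the ticket cannot then mark its own corrupt files as good.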
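Ticket 140177 above describes transfers that timed out on IPv6 without falling back to IPv4. The usual remedy on the client side is to try each resolved address in turn rather than failing on the first family. A minimal sketch of that fallback loop, assuming nothing about the actual FTS implementation:

```python
import socket

def connect_with_fallback(host, port, timeout=5.0):
    """Try every address getaddrinfo returns (typically IPv6 first),
    moving on to the next one on error or timeout instead of giving up."""
    last_err = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(addr)
            sock.settimeout(None)  # restore blocking mode for the caller
            return sock
        except OSError as err:
            last_err = err
            sock.close()
    raise last_err if last_err else OSError("no addresses resolved")
```

With this pattern an unreachable IPv6 address costs one timeout and the connection still succeeds over IPv4; a client that uses only the first resolved address fails outright, which matches the behaviour seen in the ticket.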
Availability Report
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
---|---|---|---|---|---|---|---|
2019-03-19 | 100 | 100 | 100 | 100 | 100 | 93 | |
2019-03-20 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-21 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-22 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-23 | 81 | 81 | 100 | 100 | 100 | 100 | |
2019-03-24 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-25 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-26 | 100 | 100 | 100 | 100 | 100 | 100 |
Hammercloud Test Report
Target Availability for each site is 97.0% | Red <90% | Orange <97% |
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
2019-03-19 | 96 | 99 | |
2019-03-20 | 100 | 99 | |
2019-03-21 | 100 | N/A | |
2019-03-22 | 100 | N/A | |
2019-03-23 | 100 | 100 | |
2019-03-24 | 100 | 100 | |
2019-03-25 | 100 | 99 | |
2019-03-26 | 100 | 99 |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
Notes from Meeting.