Revision as of 09:23, 2 April 2019
RAL Tier1 Operations Report for 1st April 2019
Review of Issues during the week 26th March 2019 to the 1st April 2019.
- This year's CPU capacity entered production on 27th March. V11 CPU has been retired.
- gdss733 had a double disk failure in Castor and was out of production from 27th to 29th March.
- CMS CPU efficiency was poor all last week (25%-40%); it started dropping around the 20th March.
- We fixed a problem that prevented LSST jobs from running (HTCondor had not been updated with the new VOMS server).
Current operational status and issues
Resolved Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
gdss733 | LHCb | lhcbDst | d1t0 | -
Ongoing Castor Disk Server Issues
Machine | VO | DiskPool | dxtx | Comments |
---|---|---|---|---|
- | - | - | - | -
Limits on concurrent batch system jobs.
- ALICE - 1000
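A per-VO cap like the one above is the kind of limit HTCondor (the batch system mentioned under the LSST fix) can enforce with its concurrency-limits mechanism. A minimal sketch, assuming an HTCondor pool — the limit name is illustrative, not RAL's actual configuration:

```
# Central-manager (negotiator) configuration: define a named limit of
# 1000 concurrently running slots for the ALICE VO.
ALICE_LIMIT = 1000
```

Each ALICE job's submit description then claims one unit of the limit with `concurrency_limits = ALICE`; the negotiator stops matching further ALICE jobs once 1000 are running.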
Notable Changes made since the last meeting.
- NTR
Entries in GOC DB starting since the last report.
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
- | - | - | - | - | - | - | - |
Declared in the GOC DB
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
- | - | - | - | - | - | - | - |
- No ongoing downtime
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
Listing by category:
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during morning of the meeting).
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope | Solution |
---|---|---|---|---|---|---|---|---|---|
140521 | ops | in progress | less urgent | 01/04/2019 | 01/04/2019 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer@gridftp.echo.stfc.ac.uk | EGI | |
140511 | cms | in progress | urgent | 01/04/2019 | 01/04/2019 | CMS_Facilities | T1_UK_RAL SAM job run out of date | WLCG | |
140447 | dteam | in progress | less urgent | 27/03/2019 | 02/04/2019 | Network problem | packet loss outbound from RAL-LCG2 over IPv6 | EGI | |
140220 | mice | in progress | less urgent | 15/03/2019 | 25/03/2019 | Other | mice LFC to DFC transition | EGI | |
139672 | other | in progress | urgent | 13/02/2019 | 18/03/2019 | Middleware | No LIGO pilots running at RAL | EGI | |
138665 | mice | on hold | urgent | 04/12/2018 | 30/01/2019 | Middleware | Problem accessing LFC at RAL | EGI |
GGUS Tickets Closed Last Week
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope | Solution |
---|---|---|---|---|---|---|---|---|---|
140493 | atlas | solved | less urgent | 29/03/2019 | 29/03/2019 | File Transfer | UK RAL-LCG2 MCTAPE: transfer error with "Connection timed out" | WLCG | Hi Xin Wang,
This looks more like a problem with the source site, LRZ-LMU_DATADISK. The error message refers to the source URL at httpg://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2. Also, if I try to download from the source path, it gets stuck: gfal-copy srm://lcg-lrz-srm.grid.lrz.de:8443/srm/managerv2?SFN=/pnfs/lrz-muenchen.de/data/atlas/dq2/atlasdatadisk/rucio/mc16_13TeV/35/cb/HITS.17137527._002530.pool.root.1 . Can you assign a new ticket for LRZ-LMU? I hope it's OK for me to mark this ticket "solved". Please reopen if I was mistaken. Thanks, Tim. |
140467 | cms | solved | urgent | 28/03/2019 | 01/04/2019 | CMS_Data Transfers | Stuck file at RAL | WLCG | Stuck file had missing stripes and a zeroth stripe with zero size. This was deleted by hand and the errors stopped appearing. |
140443 | none | verified | top priority | 27/03/2019 | 01/04/2019 | Other | This TEST ALARM has been raised for testing GGUS alarm work flow after a new GGUS release. | WLCG | Alarms raised (and acknowledged) internally at RAL
Closing this ticket. |
140400 | ops | verified | less urgent | 26/03/2019 | 29/03/2019 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer@gridftp.echo.stfc.ac.uk | EGI | Tests are now being run with the correct certificate. |
140385 | cms | solved | less urgent | 25/03/2019 | 29/03/2019 | CMS_Data Transfers | Data Transfer problems at T1_UK_RAL | WLCG | The datasets listed had each one file with a problem. Two had zero size and were deleted from tape. Another seven were deleted from tape and invalidated by the data transfer team. The list of stuck routing files is now empty apart from a file from a T2 site currently experiencing problems. |
140177 | cms | closed | urgent | 13/03/2019 | 01/04/2019 | CMS_Data Transfers | RAL FTS - Transfers failing from T1_US_FNAL_Disk to some sites | WLCG | IPv6 was registered, although not fully enabled at FNAL. RAL had IPv6 fully enabled, so expected to transfer via that connection, however it was timing out and not falling back to IPv4. FNAL removed the IPv6 address, and since then the link has been green. |
139723 | atlas | solved | less urgent | 15/02/2019 | 27/03/2019 | Data Management - generic | permissions on scratchdisk | EGI | Hi Folks,
This appears to have become a somewhat convoluted ticket which, as far as I can tell, has simply become a conversation thread. Assuming that the original issue has been resolved (further workarounds/enhancements notwithstanding), I'm marking this as solved. Cheers Confused of Tier-1! (Darren) |
138033 | atlas | solved | urgent | 01/11/2018 | 26/03/2019 | Other | singularity jobs failing at RAL | EGI | As the new batch containers have been fully rolled out and appear to have been working since the 18/03/19 I'm going to call this one as solved. |
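Ticket 140177 above describes a transfer client that timed out over IPv6 without ever falling back to IPv4. The fallback logic it lacked can be sketched in a few lines of Python; `fake_connect` and the addresses are illustrative stand-ins (documentation-range addresses, not FNAL's), not the FTS implementation:

```python
import socket

def connect_with_fallback(addresses, try_connect):
    """Try each (family, address) pair in order and return the first that
    connects, instead of failing outright when the first one times out."""
    last_err = None
    for family, addr in addresses:
        try:
            return family, try_connect(addr)
        except OSError as err:   # TimeoutError is a subclass of OSError
            last_err = err       # e.g. IPv6 timed out; move on to IPv4
    raise last_err               # every address failed

# Simulated endpoint mirroring the ticket: IPv6 times out, IPv4 answers.
def fake_connect(addr):
    if addr == "2001:db8::1":
        raise TimeoutError("connection timed out")
    return "connected to " + addr

family, conn = connect_with_fallback(
    [(socket.AF_INET6, "2001:db8::1"), (socket.AF_INET, "192.0.2.1")],
    fake_connect,
)
```

With the IPv6 address removed (as FNAL did), only the IPv4 entry remains and the first attempt succeeds; with fallback in place, removing the address would not have been necessary.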
Availability Report
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
---|---|---|---|---|---|---|---|
2019-03-19 | 100 | 100 | 100 | 100 | 100 | 93 | |
2019-03-20 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-21 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-22 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-23 | 81 | 81 | 100 | 100 | 100 | 100 | |
2019-03-24 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-25 | 100 | 100 | 100 | 100 | 100 | 100 | |
2019-03-26 | 100 | 100 | 100 | 100 | 100 | 100 |
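Averaged over the eight days above, the week's figures work out as follows; a quick check, reading the numbers straight from the table:

```python
# Daily availability per column, 2019-03-19 .. 2019-03-26 (from the table).
days = {
    "Atlas":      [100, 100, 100, 100, 81, 100, 100, 100],
    "Atlas-Echo": [100, 100, 100, 100, 81, 100, 100, 100],
    "CMS":        [100] * 8,
    "LHCB":       [100] * 8,
    "Alice":      [100] * 8,
    "OPS":        [93, 100, 100, 100, 100, 100, 100, 100],
}

# Weekly mean for each column.
averages = {vo: sum(v) / len(v) for vo, v in days.items()}
# Atlas and Atlas-Echo average 97.625; OPS averages 99.125; the rest 100.
```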
Hammercloud Test Report
Target Availability for each site is 97.0% (Red: <90%, Orange: <97%)
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
2019-03-19 | 96 | 99 | |
2019-03-20 | 100 | 99 | |
2019-03-21 | 100 | N/A | |
2019-03-22 | 100 | N/A | |
2019-03-23 | 100 | 100 | |
2019-03-24 | 100 | 100 | |
2019-03-25 | 100 | 99 | |
2019-03-26 | 100 | 99 |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
Notes from Meeting.