Revision as of 11:59, 30 July 2018
RAL Tier1 Operations Report for 30th July 2018
Review of Issues during the week 23rd July to the 30th July 2018. |
- Every circuit breaker in the Tier-1's R89 building was tested between Tuesday 24th and Thursday 26th July. The testing was completed successfully. The majority of racks are powered by dual PDUs on different circuit breakers, so in theory no rack should have lost power. 14 PDUs failed over the three-day period; this was within the expected number of failures, although at the highest end of our estimates, and we had sufficient replacement PDUs.
Impact to the Tier-1:
- Two production Castor disk servers went down, with temporary loss of access to some files.
- The Castor transfer manager went down for ATLAS, with loss of the ability to schedule new transfers for ~20 minutes.
- A few SL5 machines also went down. No impact on service, but a useful reminder to get them decommissioned properly.
- It is worth noting that two Echo storage nodes went down with no impact on service. This happened because their power cables were not correctly fitted.
- The GOCDB service was down for 45 minutes. This was not a direct result of the power testing but of a failed network switch (GOCDB is not on the Tier-1 network, so this failure did not impact the Tier-1). The read-only version (in Germany) stayed up. An EGI broadcast was sent out a few minutes after the service went down.
Current operational status and issues |
- None.
Resolved Castor Disk Server Issues |
- None
Ongoing Castor Disk Server Issues |
- None.
Limits on concurrent batch system jobs. |
- CMS Multicore 550
Notable Changes made since the last meeting. |
- None.
Entries in GOC DB starting since the last report. |
Declared in the GOC DB |
Service | ID | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|---|
Everything! | 25670 | SCHEDULED | WARNING | 24/07/2018 09:00 | 24/07/2018 16:30 | 1 day | Site in Warning during testing of electrical circuits in the computer room. Ticket the site as usual if you see problems. All services should continue working during these tests but there is an increased likelihood of problems. |
Everything! | 25671 | SCHEDULED | WARNING | 25/07/2018 09:00 | 25/07/2018 16:30 | 1 day | Site in Warning during testing of electrical circuits in the computer room. Ticket the site as usual if you see problems. All services should continue working during these tests but there is an increased likelihood of problems. |
Everything! | 25672 | SCHEDULED | WARNING | 26/07/2018 09:00 | 26/07/2018 16:30 | 1 day | Site in Warning during testing of electrical circuits in the computer room. Ticket the site as usual if you see problems. All services should continue working during these tests but there is an increased likelihood of problems. |
Advanced warning for other interventions |
The following items are being discussed and are still to be formally scheduled and announced. |
Listing by category:
- Castor:
- Update systems to use SL7 and configured by Quattor/Aquilon. (Tape servers done)
- Move to generic Castor headnodes.
- Internal
- DNS servers will be rolled out within the Tier1 network.
Open GGUS Tickets (Snapshot taken during the morning of the meeting). |
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
136199 | lhcb | in progress | very urgent | 18/07/2018 | 23/07/2018 | File Transfer | Lots of submitted transfers on RAL FTS | WLCG |
136138 | t2k.org | in progress | urgent | 16/07/2018 | 23/07/2018 | File Access | Extremely long download times for T2K files on tape at RALL - Part 2 | EGI |
136028 | cms | in progress | top priority | 10/07/2018 | 24/07/2018 | CMS_AAA WAN Access | Issues reading files at T1_UK_RAL_Disk | WLCG |
134685 | dteam | in progress | less urgent | 23/04/2018 | 09/07/2018 | Middleware | please upgrade perfsonar host(s) at RAL-LCG2 to CentOS7 | EGI |
124876 | ops | in progress | less urgent | 07/11/2016 | 23/07/2018 | Operations | [Rod Dashboard] Issue detected : hr.srce.GridFTP-Transfer-ops@gridftp.echo.stfc.ac.uk | EGI |
GGUS Tickets Closed Last week |
Request id | Affected vo | Status | Priority | Date of creation | Last update | Type of problem | Subject | Scope |
---|---|---|---|---|---|---|---|---|
136229 | cms | solved | very urgent | 19/07/2018 | 19/07/2018 | Data Management - generic | RAL FTS Service not reachable via IPv6 | EGI |
136110 | atlas | solved | urgent | 13/07/2018 | 17/07/2018 | File Transfer | RAL-LCG2: Transfer errors as source with "SRM_FILE_UNAVAILABLE" | WLCG |
136097 | other | solved | urgent | 13/07/2018 | 19/07/2018 | Operations | Please restart frontier-squid on RAL cvmfs stratum 1 | EGI |
135940 | cms | closed | urgent | 04/07/2018 | 20/07/2018 | CMS_Data Transfers | Transfers failing to RAL_Disk - no data available | WLCG |
135822 | cms | closed | very urgent | 26/06/2018 | 19/07/2018 | CMS_Central Workflows | File Read Problems for Production at T1_UK_RAL | WLCG |
Availability Report |
Target Availability for each site is 97.0% | Red <90% | Orange <97% |
Day | Atlas | Atlas-Echo | CMS | LHCB | Alice | OPS | Comments |
---|---|---|---|---|---|---|---|
2018-07-16 | 100 | 100 | 100 | 100 | 100 | 100 | |
2018-07-17 | 100 | 100 | 95 | 100 | 100 | 100 | |
2018-07-18 | 100 | 100 | 92 | 100 | 100 | 100 | |
2018-07-19 | 94 | 100 | 93 | 100 | 100 | 100 | |
2018-07-20 | 100 | 100 | 93 | 100 | 100 | 100 | |
2018-07-21 | 100 | 100 | 96 | 100 | 100 | 100 | |
2018-07-22 | 100 | 100 | 98 | 100 | 100 | 100 | |
2018-07-23 | 100 | 100 | 100 | 100 | 100 | 100 |
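The Red/Orange colour coding used in the availability tables above amounts to a simple threshold check against the 97.0% target. A minimal sketch of that rule (the `classify` function is illustrative only, not part of any site tooling):

```python
def classify(availability_pct: float) -> str:
    """Map a daily availability percentage to the report's colour coding:
    below 90% is Red, below the 97% target is Orange, otherwise OK."""
    if availability_pct < 90.0:
        return "Red"
    if availability_pct < 97.0:
        return "Orange"
    return "OK"

# Example using the CMS figures above: 92 on 2018-07-18, 98 on 2018-07-22.
print(classify(92))   # below the 97% target
print(classify(98))   # meets the target
```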
Hammercloud Test Report |
Target Availability for each site is 97.0% | Red <90% | Orange <97% |
Day | Atlas HC | CMS HC | Comment |
---|---|---|---|
2018-07-16 | 94 | 99 | |
2018-07-17 | 89 | 99 | |
2018-07-18 | 77 | 98 | |
2018-07-19 | 93 | 99 | |
2018-07-20 | 83 | 97 | |
2018-07-21 | 97 | 100 | |
2018-07-22 | 93 | 100 | |
2018-07-23 | - | - |
Key: Atlas HC = Atlas HammerCloud (Queue RAL-LCG2_UCORE, Template 841); CMS HC = CMS HammerCloud
Notes from Meeting. |