RAL Tier1 Operations Report for 17th December 2014
- The Tier1's plans for the Christmas and New Year holiday can be seen on our [blog](http://www.gridpp.rl.ac.uk/blog/2014/12/17/ral-tier1-plans-for-christmas-new-year-holiday-3/).
Review of Issues during the week 10th to 17th December 2014.
- On Saturday (13th Dec) network problems severely affected Tier1 services. A network switch was found to have problems coincident with the service issues. A member of staff came on site and resolved the switch problem. However, this turned out not to be the principal underlying cause of the service problems, which were then traced to a DNS server that was not responding. Systems were re-configured not to use that server as their primary DNS server; in parallel, a member of Networking staff attended on site to fix the DNS server itself. The problems lasted from around 07:00 to 22:00 that day.
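The report does not say how the unresponsive nameserver was detected, so the following is an illustrative sketch only: a stdlib-Python probe that sends a minimal DNS A-record query over UDP and reports whether a nameserver answers in time, the kind of check that can flag this failure mode. The hostname and the TEST-NET addresses are placeholders, not the Tier1's actual resolvers.

```python
import socket
import struct

def dns_server_responds(server, name="example.org", timeout=3.0):
    """Return True if `server` answers a minimal DNS A-record query
    over UDP within `timeout` seconds. Illustrative sketch only."""
    # 12-byte DNS header: query ID, flags (recursion desired),
    # QDCOUNT=1, ANCOUNT=NSCOUNT=ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, root byte, QTYPE=A, QCLASS=IN
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (server, 53))
        sock.recv(512)   # any reply within the timeout counts as alive
        return True
    except socket.timeout:
        return False
    finally:
        sock.close()

# Placeholder (TEST-NET) addresses, not the real Tier1 nameservers:
for ns in ["192.0.2.1", "192.0.2.2"]:
    print(ns, "OK" if dns_server_responds(ns) else "NOT RESPONDING")
```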
Resolved Disk Server Issues
- GDSS778 (LHCbDst D1T0) failed in the early hours of Monday 15th December. Tests revealed faulty RAM, which was replaced. The server was returned to production around 09:15 this morning (Wed. 17th Dec).
Current operational status and issues
- None.
Ongoing Disk Server Issues
- None.
Notable Changes made this last week.
- On Tuesday morning (16th Dec) the firmware in the Tier1 router pair was updated to the latest production version. This is ahead of a patch to be applied in the New Year that should fix the ongoing RIP protocol problem.
- Following the restriction on the number of CMS batch jobs imposed during problems a week or so ago, the CMS job limits on the farm have been progressively increased.
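The report does not describe the mechanism used to raise the limits, so the following is a hypothetical sketch of a staged ramp-up policy of the kind described: grow the cap by a fixed fraction each period while the job failure rate stays low. All names and thresholds here are invented for illustration, not the Tier1's actual tooling.

```python
def next_job_limit(current_limit, failure_rate, ceiling,
                   step=0.25, max_failure_rate=0.05):
    """Raise a batch-job cap in stages while the failure rate stays low.

    Hypothetical helper: returns the cap to apply for the next period.
    current_limit    -- cap currently in force
    failure_rate     -- fraction of jobs failing in the last period
    ceiling          -- normal (pre-restriction) cap
    step             -- fractional increase per period (25% here)
    max_failure_rate -- hold rather than grow above this failure rate
    """
    if failure_rate > max_failure_rate:
        return current_limit          # hold while problems persist
    return min(ceiling, int(current_limit * (1 + step)))

# Example: a restricted cap of 2000 jobs ramping back toward 6000
limit = 2000
for period in range(5):
    limit = next_job_limit(limit, failure_rate=0.01, ceiling=6000)
    print(period + 1, limit)   # 2500, 3125, 3906, 4882, 6000
```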
Declared in the GOC DB
- None.
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- The rollout of the RIP protocol to the Tier1 routers still has to be completed. A software patch from the vendors will be applied to the Tier1 routers on Tuesday 6th January.
- The next quarterly UPS/Generator load test will take place on Wednesday 7th January.
- Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room: Tue-Thu 13-15 January & Tue-Thu 20-22 January. There are some systems that need to be re-powered in preparation for this work.
- Completing Castor headnode upgrades to SL6: Tuesday 6th Jan - GEN; Wednesday 7th Jan - Nameserver (transparent - at risk)
Listing by category:
- Databases:
  - A new database (Oracle RAC) has been set up to host the Atlas 3D database. This is updated from CERN via Oracle GoldenGate. This system is yet to be brought into use. (Currently Atlas 3D/Frontier still uses the OGMA database system, although this was also changed to update from CERN using Oracle GoldenGate.)
  - Switch LFC/3D to new database infrastructure.
  - Update to Oracle 11.2.0.4.
- Castor:
  - Update Castor headnodes to SL6 (ongoing).
  - Update SRMs to new version (includes updating to SL6).
  - Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
  - Update Castor to 2.1-14-latest; this depends on SL6 being deployed.
- Networking:
  - Move switches connecting the 2011 disk server batches onto the Tier1 mesh network.
  - Make routing changes to allow the removal of the UKLight Router.
  - Enable the RIP protocol for updating routing tables on the Tier1 routers (see the RIPv2 sketch after this listing).
- Fabric:
  - Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes finished; migration of GEN from 'A' to 'D' tapes to follow.)
  - Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC).
  - There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during January.
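As background to the RIP items above: RIPv2 (RFC 2453) exchanges routing tables as UDP messages on port 520, each carrying up to 25 fixed-size 20-byte route entries. The following is a minimal, illustrative Python decoder, just to show the data the routers will exchange once the protocol is enabled; it is not anything deployed at RAL.

```python
import socket
import struct

def parse_ripv2(packet):
    """Decode a RIPv2 response into (network, mask, next_hop, metric)
    tuples. Illustrative sketch: ignores authentication entries and
    does no validation beyond the fixed sizes."""
    command, version, _ = struct.unpack(">BBH", packet[:4])
    if command != 2 or version != 2:          # command 2 = response
        raise ValueError("not a RIPv2 response")
    routes = []
    for off in range(4, len(packet) - 19, 20):   # 20-byte route entries
        afi, tag, addr, mask, nxt, metric = struct.unpack(
            ">HH4s4s4sI", packet[off:off + 20])
        routes.append((socket.inet_ntoa(addr), socket.inet_ntoa(mask),
                       socket.inet_ntoa(nxt), metric))  # 16 = unreachable
    return routes

# Example: one route entry for 10.1.0.0/16 via 10.0.0.1 with metric 2
pkt = struct.pack(">BBH", 2, 2, 0) + struct.pack(
    ">HH4s4s4sI", 2, 0, socket.inet_aton("10.1.0.0"),
    socket.inet_aton("255.255.0.0"), socket.inet_aton("10.0.0.1"), 2)
print(parse_ripv2(pkt))  # [('10.1.0.0', '255.255.0.0', '10.0.0.1', 2)]
```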
Entries in GOC DB starting between the 10th and 17th December 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason |
---|---|---|---|---|---|---|
srm-atlas, srm-cms-disk, srm-cms, srm-lhcb | UNSCHEDULED | OUTAGE | 13/12/2014 14:30 | 13/12/2014 22:21 | 7 hours and 51 minutes | Correcting warning on SRMs to an Outage. |
srm-atlas, srm-cms, srm-lhcb | UNSCHEDULED | WARNING | 13/12/2014 07:00 | 13/12/2014 22:21 | 15 hours and 21 minutes | Castor instances under investigation |
srm-atlas | SCHEDULED | OUTAGE | 10/12/2014 10:00 | 10/12/2014 11:43 | 1 hour and 43 minutes | OS upgrade (SL6) on headnodes for Atlas Castor instance. |
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject |
---|---|---|---|---|---|---|---|
110776 | Green | Urgent | Waiting for Reply | 2014-12-15 | 2014-12-17 | CMS | Phedex Node Name Transition |
110605 | Green | Less Urgent | In Progress | 2014-12-08 | 2014-12-12 | ops | [Rod Dashboard] Issues detected at RAL-LCG2 (srm-cms-disk.gridpp.rl.ac.uk) |
110382 | Green | Less Urgent | In Progress | 2014-11-26 | 2014-12-15 | N/A | RAL-LCG2: please reinstall your perfsonar hosts(s) |
109712 | Amber | Urgent | In Progress | 2014-10-29 | 2014-11-27 | CMS | Glexec exited with status 203; ... |
109694 | Yellow | Urgent | On hold | 2014-11-03 | 2014-12-15 | SNO+ | gfal-copy failing for files at RAL |
108944 | Red | Urgent | In Progress | 2014-10-01 | 2014-12-09 | CMS | AAA access test failing at T1_UK_RAL |
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2014-12-15 | Atlas | BDII vs SRM inconsistent storage capacity numbers |
106324 | Red | Urgent | In Progress | 2014-06-18 | 2014-12-12 | CMS | pilots losing network connections at T1_UK_RAL |
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
10/12/14 | 100 | 100 | 92.8 | 100 | 100 | 99 | n/a | Upgrade of Atlas Castor headnodes to SL6. |
11/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | n/a | |
12/12/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
13/12/14 | 71.1 | 100 | 31.9 | 35.3 | 33.6 | 90 | 100 | Problems with a DNS server. |
14/12/14 | 100 | 100 | 100 | 100 | 100 | 97 | 100 | |
15/12/14 | 100 | 100 | 99.0 | 100 | 95.8 | 100 | n/a | Single SRM test failure in each case.
16/12/14 | 100 | 100 | 100 | 100 | 100 | 97 | 100 |
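A quick sanity check on the 13th Dec row: the availability figures are, in essence, the percentage of the 24-hour day during which the tests passed, so the outage spanning roughly 07:00 to 22:21 (about 15.35 hours, per the GOC DB entries above) predicts figures close to those recorded for Atlas, CMS and LHCb. A small worked example:

```python
def daily_availability(outage_hours):
    """Percentage of a 24-hour day outside the outage window."""
    return round(100.0 * (24.0 - outage_hours) / 24.0, 1)

# ~15.35 hours of failed tests on 13th Dec predicts ~36% availability,
# in line with the 31.9 / 35.3 / 33.6 figures in the table above.
print(daily_availability(15.35))   # 36.0
```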