RAL Tier1 Operations Report for 5th November 2014
Review of Issues during the week 29th October to 5th November 2014.
- A problem reported last week, with a user filling up the spool areas on the ARC CEs, has been fixed. The user was contacted (and responded) and, in addition, an automated process that cleans up the large log files has been put in place (a sketch of such a clean-up follows this list).
- There have been problems with three of the four ARC CEs since the end of last week: the process of tidying up and reporting completed jobs took a long time. ARC-CE01 was declared down, then drained and re-installed last Friday (24th Oct). ARC-CE02 and ARC-CE03 were drained over the weekend and re-installed on Monday (27th Oct).
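As an illustration, a minimal sketch of such an automated clean-up, assuming a hypothetical spool path and size threshold (the report does not describe the actual mechanism used at RAL):

```python
#!/usr/bin/env python
"""Truncate oversized job log files under an ARC CE spool area.

A sketch intended to be run periodically (e.g. from cron). The spool
directory and the 1 GB threshold are illustrative assumptions, not the
values used at RAL.
"""
import os

SPOOL_DIR = "/var/spool/arc/grid"  # hypothetical ARC CE session directory
MAX_BYTES = 1024 ** 3              # truncate anything larger than ~1 GB


def clean_spool(root=SPOOL_DIR, limit=MAX_BYTES):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > limit:
                    # Truncate in place rather than delete, so anything
                    # still referencing the file keeps a valid handle.
                    with open(path, "w"):
                        pass
            except OSError:
                continue  # file vanished or unreadable; skip it


if __name__ == "__main__":
    clean_spool()
```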
Resolved Disk Server Issues
- GDSS673 (lhcbRawRdst - D0T1) failed at around 7am on Sunday morning (26th Oct). A faulty disk drive was found. The server was checked out and its firmware updated, and it was returned to service during the working day on Tuesday (28th Oct). One file was declared lost from this machine.
- As reported last week: GDSS763 (AtlasDataDisk - D1T0) had failed for the third time in around a month. This system has been completely drained and is undergoing further investigation.
Current operational status and issues
- None
Ongoing Disk Server Issues
- None
Notable Changes made this last week.
- Oracle patching is ongoing, with the Pluto standby database patched on Monday (27th) and the Pluto primary patched on Wednesday (29th).
- Increased the number of jobs Alice is allowed to run to 3500. (23rd October)
Declared in the GOC DB
None
Advance warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
- The Oracle regular "PSU" patches were applied to the Pluto Castor standby database system on Monday (27th Oct) and to the Pluto production database on Wednesday (29th Oct). This was expected to be a transparent intervention. The 'Pluto' database hosts the Castor Nameserver as well as the CMS and LHCb stager databases.
- The rollout of the RIP protocol to the Tier1 routers still has to be completed.
- First quarter 2015: Circuit testing of the remaining (i.e. non-UPS) circuits in the machine room.
Listing by category:
- Databases:
- Apply latest Oracle patches (PSU) to the production database systems (Castor, LFC). (Underway).
- A new database (Oracle RAC) has been set-up to host the Atlas3D database. This is updated from CERN via Oracle GoldenGate.
- Switch LFC/3D to new Database Infrastructure.
- Castor:
- Update Castor headnodes to SL6.
- Fix discrepancies found in some of the Castor database tables and columns. (The issue has no operational impact.)
- Networking:
- Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
- Make routing changes to allow the removal of the UKLight Router.
- Enable the RIP protocol for updating routing tables on the Tier1 routers.
- Fabric:
- Migration of data to new T10KD tapes. (Migration of CMS from 'B' to 'D' tapes underway; migration of GEN from 'A' to 'D' tapes to follow.)
- Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
- There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room (Expected first quarter 2015).
Entries in GOC DB starting between the 29th October and 5th November 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
---|---|---|---|---|---|---
Whole Site | UNSCHEDULED | WARNING | 05/11/2014 09:00 | 05/11/2014 10:00 | 1 hour | Putting site At Risk for a reboot of a network router. Anticipate only two very short (few-second) breaks in connectivity.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
---|---|---|---|---|---|---|---
109845 | Green | Less Urgent | Waiting for Reply | 2014-11-04 | 2014-11-04 | | egi.eu CVMFS repository and GridPP wiki
109712 | Green | Urgent | In Progress | 2014-10-29 | 2014-10-29 | CMS | Glexec exited with status 203; ... |
109694 | Green | Urgent | In Progress | 2014-11-03 | 2014-11-04 | SNO+ | gfal-copy failing for files at RAL (see the sketch after this table)
109276 | Green | Urgent | On Hold | 2014-10-11 | 2014-11-03 | CMS | Submissions to RAL FTS3 REST interface are failing for some users |
108944 | Yellow | Urgent | On Hold | 2014-10-01 | 2014-11-03 | CMS | AAA access test failing at T1_UK_RAL |
107935 | Red | Less Urgent | On Hold | 2014-08-27 | 2014-11-03 | Atlas | BDII vs SRM inconsistent storage capacity numbers |
107880 | Red | Less Urgent | In Progress | 2014-08-26 | 2014-10-29 | SNO+ | srmcp failure |
106324 | Red | Urgent | On Hold | 2014-06-18 | 2014-10-13 | CMS | pilots losing network connections at T1_UK_RAL |
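For ticket 109694 above (gfal-copy failing for SNO+ files at RAL), a transfer of that kind can be reproduced with the gfal2 Python bindings so the underlying error is surfaced. This is a sketch only: the source SURL and local destination below are hypothetical placeholders, not the failing files from the ticket.

```python
#!/usr/bin/env python
"""Reproduce a gfal-copy style transfer via the gfal2 Python bindings.

The SRM source URL and local destination are placeholders; substitute
the actual failing file when debugging.
"""
import gfal2

SRC = "srm://srm-snoplus.gridpp.rl.ac.uk/castor/ads.rl.ac.uk/prod/snoplus/example_file"  # placeholder SURL
DST = "file:///tmp/example_file"

ctx = gfal2.creat_context()
try:
    ctx.filecopy(SRC, DST)  # equivalent to `gfal-copy SRC DST` on the command line
    print("copy succeeded")
except gfal2.GError as err:
    # The gfal error code and message are what a ticket follow-up needs.
    print("copy failed: %s" % err)
```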
Availability Report
Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud
Day | OPS | Alice | Atlas | CMS | LHCb | Atlas HC | CMS HC | Comment |
---|---|---|---|---|---|---|---|---|
29/10/14 | 100 | 100 | 97.7 | 100 | 100 | 99 | 100 | Two SRM Test failures (both timeouts) |
30/10/14 | 100 | 100 | 100 | 95.9 | 95.9 | 99 | 99 | Both CMS and LHCb had a single SRM test failure just after midnight. |
31/10/14 | 100 | 100 | 100 | 100 | 100 | 100 | 99 | |
01/11/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
02/11/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | |
03/11/14 | 100 | 100 | 98.3 | 100 | 100 | 100 | 100 | Two consecutive SRM PUT test failures.
04/11/14 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |