RAL Tier1 Operations Report for 27th August 2014

Review of Issues during the fortnight 13th to 27th August 2014.
  • Following some problems with disk server draining in Castor 2.1.14, a modified procedure has been tested on one disk server and was successful.
  • While the farm was quiet around the 14th August, the number of permitted Alice jobs was increased.
  • Late evening on Monday 18th Aug there was an (Atlas) Oracle database crash due to a known (and reported) bug. The database failed over to another node in the Oracle RAC. There were some restarts of the (Atlas) SRM processes as the failover occurred, and again as the database was manually returned to its 'correct' node in the RAC an hour later.
  • There were load problems on the Atlas Scratch disk over the weekend (Sunday & Monday 24/25 August).
Resolved Disk Server Issues
  • None.
Current operational status and issues
  • Discrepancies were found in some of the Castor database tables and columns. The Castor team are considering options with regard to fixing these. The issue has no operational impact.
  • We are still investigating xroot access to CMS Castor following the upgrade on the 17th June. The service has improved but there may still be work to be done.
Ongoing Disk Server Issues
  • None
Notable Changes made over the last fortnight.
  • On Wednesday 6th August, 8 disk servers (around 900TB in total) were deployed to AtlasDataDisk and 5 disk servers (a total of 180TB, each with 10Gbit interfaces) were added to the disk cache for CMSTape. This is in addition to the 10 disk servers (over a Petabyte of capacity) deployed into lhcbDst and reported previously.
Declared in the GOC DB
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgfts.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 02/09/2014 11:00 | 02/10/2014 11:00 | 30 days | Service being decommissioned.
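As an illustration only (not part of the original report), declared downtimes such as the one above can also be retrieved programmatically. The sketch below is hedged: the public GOC DB programmatic interface URL, the get_downtime method, the topentity parameter and the RAL-LCG2 site name are assumptions, not details taken from this report.

 # Hedged sketch: list downtimes declared in the GOC DB for the RAL Tier1.
 # Assumptions (not from this report): the public GOCDB programmatic
 # interface exposes a get_downtime method, the site is registered as
 # RAL-LCG2, and results come back as XML <DOWNTIME> records.
 import urllib.request
 import xml.etree.ElementTree as ET

 URL = ("https://goc.egi.eu/gocdbpi/public/"
        "?method=get_downtime&topentity=RAL-LCG2")

 with urllib.request.urlopen(URL) as response:
     tree = ET.parse(response)

 # Field names inside each record can vary, so print them generically
 # rather than hard-coding element names.
 for downtime in tree.getroot().iter("DOWNTIME"):
     print({child.tag: (child.text or "").strip() for child in downtime})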
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The rollout of the RIP protocol to the Tier1 routers still has to be completed.
  • We are planning the termination of the FTS2 service (announced for 2nd September) now that almost all use is on FTS3.
  • The removal of the (NFS) software server is scheduled for the 2nd September.
  • We are planning to stop access to the CREAM CEs, although possibly leaving them available to ALICE for some time. No date has yet been specified for this.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • None.
  • Networking:
    • Move the switches connecting the 2011 batches of disk servers onto the Tier1 mesh network.
    • Make routing changes to allow the removal of the UKLight Router.
    • Enable the RIP protocol for updating routing tables on the Tier1 routers. (Requires resolution of blocking issue).
  • Fabric:
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on the remaining EMC disk arrays (Castor, FTS/LFC).
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 13th and 27th August 2014.
Service | Scheduled? | Outage/At Risk | Start | End | Duration | Reason
lcgfts.gridpp.rl.ac.uk | SCHEDULED | OUTAGE | 02/09/2014 11:00 | 02/10/2014 11:00 | 30 days | Service being decommissioned.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID | Level | Urgency | State | Creation | Last Update | VO | Subject
106324 | Red | Urgent | In Progress | 2014-06-18 | 2014-08-07 | CMS | pilots losing network connections at T1_UK_RAL
105405 | Red | Urgent | On Hold | 2014-05-14 | 2014-07-29 | | please check your Vidyo router firewall configuration
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
13/08/14 100 100 100 100 100 98 100
14/08/14 100 100 100 100 100 100 100
15/08/14 100 100 100 100 100 98 99
16/08/14 100 100 100 100 100 100 100
17/08/14 100 100 100 100 100 100 99
18/08/14 100 100 100 100 100 92 100
19/08/14 100 100 97.1 100 100 95 99
20/08/14 100 100 98.2 100 100 98 100
21/08/14 100 -100 95.8 100 100 77 92
22/08/14 100 100 99.0 100 100 33 99
23/08/14 100 100 96.4 100 100 31 100
24/08/14 100 100 100 100 100 86 100
25/08/14 100 100 90.2 100 100 58 100
26/08/14 100 100 100 100 100 100 91
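The table above is daily data; as a worked illustration (not from the report itself), the short Python sketch below shows how one column might be averaged over the fortnight. The figures are copied from the Atlas column; the use of a plain arithmetic mean is an assumption and not the official availability calculation.

 # Hedged sketch: mean daily Atlas availability for 13-26 August 2014,
 # using the Atlas column of the table above. A plain arithmetic mean is
 # an assumption, not the official availability algorithm.
 atlas_daily = [100, 100, 100, 100, 100, 100, 97.1,
                98.2, 95.8, 99.0, 96.4, 100, 90.2, 100]

 mean_availability = sum(atlas_daily) / len(atlas_daily)
 print(f"Atlas mean availability over the fortnight: {mean_availability:.1f}%")
 # Prints approximately 98.3%.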