Operations Bulletin Latest

From GridPP Wiki

Bulletin archive


Week commencing 5th October 2015
Task Areas
General updates

Tuesday 6th October

  • GOCDB has received a new service type request for ‘uk.ac.gridpp.vcycle’.
  • John H noted a "CVMFS problem at RAL". Apparently this was due to a misconfiguration at CERN.
  • Through HPC DIRAC work, RAL discovered a bug in how the connection re-use flag is used with the FTS commands.
  • Here is a summary of the September reports:
  • The UCL storage servers are no longer used by ATLAS.
  • The November GDB has been moved to 4th November.


Tuesday 29th September

  • Nagios was affected (and therefore so was the regional dashboard) by a weekend A/C outage at Oxford.
  • Steve J reports on: Condor libglobus_common problems
  • There was an EGI OMB on 24th September. Agenda.
  • Notes from the Monday biweekly WLCG ops meeting are available for anyone who is interested in the latest ops news.
  • On the topic 'Perfsonar Bandwidth checks not running' Duncan reported a move to a full WLCG mesh.
  • Tom would appreciate feedback on the GridPP website v2.
  • Steve Lloyd has set up a new metrics page as a basis for allocating T2 hardware funding. It uses just total Disk and total Elapsed and/or CPU time. At yesterday's PMB it was agreed that Elapsed time would be used, but the results of various combinations will be watched and assessed over the coming months. One overriding reason for using Elapsed time is that CPU time is not provided by all cloud implementations.
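As a rough illustration of the proposed metric (a sketch only — the field names, weights and numbers below are assumptions, not the actual metrics-page schema), combining total disk with total elapsed time to rank sites might look like:

```python
# Hypothetical sketch of ranking T2 sites by total disk plus total elapsed
# (wall-clock) time. Weights and example figures are illustrative only.

def t2_score(disk_tb, elapsed_hours, disk_weight=1.0, time_weight=1.0):
    """Combine total disk (TB) and total elapsed time (hours) into one score."""
    return disk_weight * disk_tb + time_weight * elapsed_hours

# Invented example data, not real site figures.
sites = {
    "SiteA": {"disk_tb": 500, "elapsed_hours": 120000},
    "SiteB": {"disk_tb": 800, "elapsed_hours": 90000},
}

# Rank sites by the combined score, highest first.
ranking = sorted(sites, key=lambda s: t2_score(**sites[s]), reverse=True)
```

Elapsed time is the common denominator here precisely because, as noted above, not all cloud implementations report CPU time.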


WLCG Operations Coordination - Agendas

Tuesday 6th October

  • There was a WLCG ops coordination meeting last week. Minutes. Agenda (which has John Gordon's accounting slides).
  • The highlights:
    • dCache sites should install the latest fix for SRM solving a vulnerability
    • All sites hosting a regional or local site xrootd should upgrade it to at least version 4.1.1
    • CMS DPM sites should consider upgrading dpm-xrootd to version 3.5.5 now (from epel-testing) or after mid October (from epel-stable) to fix a problem affecting AAA
    • Tier-1 sites should do their best to avoid scheduling OUTAGE downtimes at the same time as other Tier-1s supporting common LHC VOs. A calendar will be linked from the minutes of the 3 o'clock operations meeting to make it easy to find out whether there are already downtimes on a given date
    • The multicore accounting for WLCG is now correct for 99.5% of the CPU time, with the few remaining issues being addressed. Corrected historical accounting data is expected to be available from the production portal by the end of the month
    • All LHCb sites will soon be asked to deploy the "machine features" functionality

Tuesday 22nd September

  • There was an ops coordination meeting last Thursday: Minutes.
  • Highlights:
    • All four experiments now have an agreed workflow with the T0 for tickets that should be handled by the experiment supporters but were accidentally assigned to the T0 service managers.
    • A new FTS3 bug-fix release, 3.3.1, is now available.
    • A globus lib issue is causing problems with FTS3 for sites running IPv6.
    • A rogue configuration management tool at Glasgow, which replaced the current VOMS configuration with the old one, was picked up and unfortunately discussed as though sites had not got the message about using the new VOMS.
    • No network problems experienced with the transatlantic link despite 3 out of 4 cables being unavailable.
    • T0 experts are investigating the slow WN performance reported by LHCb and others.
    • A group of experts at CERN and CMS is investigating ARGUS authentication problems affecting CMS VOBOXes.
    • T1 & T2 sites please observe the actions requested by ATLAS and CMS (also on the WLCG Operations portal).
  • Actions for Sites; Experiments.

Tuesday 15th September


Tier-1 - Status Page

Tuesday 6th October A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting here

  • The problems with the production FTS service have been resolved. A workaround to the memory leak introduced with the new version has been supplied. This, along with a reduction in the numbers of transfers queued, has enabled the service to return to normal operation.
  • The next step in the upgrade of the Castor Oracle databases to version 11.2.0.4 is taking place today. At the time of the meeting Castor is down. This is the upgrade of the "Pluto" database which hosts the Nameserver as well as the CMS & LHCb stager databases. The previous step took place successfully last Tuesday.
  • The upgrading of the Tier1's link into the RAL core network to 40Gb took place successfully on the morning of Wednesday 30th September.
  • There is an 'At Risk' on the Tier1 tomorrow morning for a UPS/generator load test that will take place from 10:00 to 11:00.
  • There was a problem with glexec for the worker nodes over the weekend caused by a configuration error. This affected our CMS availabilities badly. The problem was fixed yesterday (Monday).
Storage & Data Management - Agendas/Minutes

Wednesday 02 Sep

  • Catch up with MICE
  • How to do transfers of lots of files with FTS3 without the proxy timing out (in particular if you need it vomsified)
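A command-line sketch of the general approach (endpoint, VO name and filenames below are illustrative assumptions, not a tested recipe): create a long-lived VOMS proxy before submitting the bulk job, so FTS3 holds a delegated credential that outlives the transfer queue. Note that the VOMS attribute certificate itself has a server-capped lifetime, so very long runs may still need proxy renewal.

```
# Sketch only: endpoint, VO and file names are hypothetical.
# 1. Create a long-lived, VOMS-signed proxy (here 96 hours):
voms-proxy-init --voms myvo.example --valid 96:00

# 2. Submit the whole batch in one bulk submission, so FTS3 manages
#    retries with the delegated proxy:
fts-transfer-submit -s https://fts3.example.ac.uk:8446 -f bulk-transfers.list
```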

Wednesday 12 Aug

  • sort of housekeeping: data cleanups, catalogue synchronisation - in particular namespace dumps for VOs
  • GridPP storage/data at future events; GridPP35 and Hepix and Cloud data events

Wednesday 08 July

  • Huge backlog of ATLAS data from Glasgow waiting to go to RAL, and oddly varying performance numbers - investigating
  • How physics data is like your Windows 95 games

Wednesday 01 July

  • Feedback on CMS's proposal for listing contents of storage
  • Simple storage on expensive raided disks vs complicated storage on el cheapo or archive drives?

Wednesday 24 June

  • Heard about the Indigo datacloud project, a H2020 project in which STFC is participating
  • Data transfers, theory and practice
    • Somewhat clunky tools to set up but perform well when they run
    • Will continue to work on recommendations/overview document
    • Worth having recommendations/experiences for different audiences - (potential) users, decision makers, techies
Tier-2 Evolution - GridPP JIRA

Tuesday 6 Oct

  • UCL Vac site now running LHCb test of two payloads per dual processor VM. Total of dual processor VMs at UCL now 120.

Tuesday 29 Sep

  • UCL Vac site updated with most recent version of Vac-in-a-Box. Now running ~216 jobs: LHCb MC and ATLAS certification jobs.
  • Drawing up list of tasks needed to be able to run a site for GridPP-supported VOs purely using VMs (e.g. VM certification by experiments etc.)
  • Discussion at GridPP Technical Meeting on storage options, including xrootd-based sites (i.e. xrootd not DPM/dCache)

Tuesday 22 Sep

Thursday 17 Sep

  • Task force to start developing advice for sites to simplify their operation in line with "6.2.5 Evolution of Tier-2 sites" in the GridPP5 proposal.
  • Mailing list for Tier-2 evolution activities: gridpp-t2evo@cern.ch - anyone welcome to join
  • Also a GridPP project on the CERN JIRA service for tracking actions. It can be used with a full or lightweight CERN account. You need to be added manually, or be on the gridpp-ops@cern.ch mailing list, to browse issues.


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 22nd September

  • Slight delay for Sheffield but overall okay - although there is a gap between today's date and the most recent update for all sites. Perhaps an APEL delay.

Monday 20th July

  • Oxford publishing 0 cores from Cream today. Maybe they forgot to switch one off. Check here.

Tuesday 14th July

  • QMUL and Sheffield appear to be lagging with publishing by a week.
  • Please check your multicore publishing status (especially those sites mentioned in June).
Documentation - KeyDocs

Tuesday 29th September Steve J: problems with the voms server at FNAL, voms.fnal.gov, have been detected; I will resolve them soon and may issue an update to Approved VOs, alerting sites with TB_SUPPORT should that occur. Approved VOs potentially affected are CDF, DZERO and LSST. Please do not act yet.

Tuesday 22nd September

  • Steve J is going to undertake some GridPP/documentation usability testing.

Tuesday 18th August

  • Lydia's document - Setup a system to do data archiving using FTS3

Tuesday 28th July

  • Ewan: /cvmfs/gridpp-vo help ... there's a lot of historical stuff on the GridPP wiki that makes it look a lot more complicated than it is now. We really should have a bit of a clear out at some point.

Tuesday 23rd June

  • Reminder that documents need reviewing!


General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 13th July

  • SR updates (small because it's summer):
      • gfal2 2.9.1
      • storm 1.11.9
      • srm-ifce 1.23.1....
      • gfal2-python 1.8.1
    • In Verification
      • gfal2-plugin-xrootd 0.3.4
  • Accounting
    • [John Gordon] "Of the WLCG sites we now have 97%+ of cpu reported with cores. I expect you all saw my recent email to GDB naming 16 sites. If one German and one Spanish site and the four Russians start publishing we will jump to 99%+"
    • New list of sites needing to update multicore accounting being prepared this evening (Monday) by Vincenzo
  • SL5 decommissioning date March 2016;
  • Next meeting 10th August

Monday 15th June

  • There was an EGI operations meeting today: agenda.
  • New Action for the NGIs: please start tracking which sites are still using SL5 services: how many services, and for each service whether it is still needed on SL5 and whether upgrades of SL5 services are expected. A wiki has been provided to record updates. It would also be interesting to understand who is using Debian.


Monitoring - Links MyWLCG

Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Tuesday 6th October

  • With the exception of the dashboard getting really confused early in the week as the Nagios instances at Oxford and Lancaster came and went, it's been a fairly quiet week. There are four outstanding tickets:
    • Three for availability / reliability (Sussex, Liverpool and Lancaster).
    • One at Bristol for a GridFTP transfer problem.

Tuesday 15th September

  • Generally quiet. QMUL have some grumbliness with the CEs; however, I understand much of this is caused by the batch farm being busy. There are low-availability tickets 'on hold' for Liverpool and UCL.
Rollout Status WLCG Baseline

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.

Tuesday 17th March

  • Daniela has updated the [https://www.gridpp.ac.uk/wiki/Staged_rollout_emi3 EMI-3 testing table]. Please check it is correct for your site. We want a clear view of where we are contributing.
  • There is a middleware readiness meeting this Wednesday. Would be good if a few site representatives joined.
  • Machine job features solution testing. Fed back that we will only commence tests if more documentation is made available. This stops the HTC solution until after CHEP. Is there interest in testing other batch systems? Raul mentioned SLURM. There are also SGE and Torque.

References


Security - Incident Procedure Policies Rota

Monday 5th October

  • Updated IGTF distribution version 1.68 available - https://dist.igtf.net/distribution/igtf/current/
  • Update on incident broadcast EGI-20150925-01 relating to compromised systems in China. - The EGI, WLCG and VO security teams are continuing their investigations. Affected sites and users have been contacted and there is no present indication of further action needed by any site in the UK. However, as more information comes to light, additional updates may be made in the near future and sites are asked as always to read any updates carefully, taking actions as recommended.

Tuesday 29th September

  • Incident broadcast EGI-20150925-01 relating to compromised systems in China.
  • UK security team meeting scheduled for 30th Sept.

Monday 29th September

  • IGTF has released a regular update to the trust anchor repository (1.68) - for distribution ON OR AFTER October 5th


The EGI security dashboard.


Services - PerfSonar dashboard | GridPP VOMS

- This includes notice of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update).

Tuesday 6th October

Tuesday 14th July

  • GridPP35 in September will have a part focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.


Tickets

Monday 5th October 2015, 14.15 BST

22 Open UK Tickets this month, all of them, Site by Site:

SUSSEX
116136 (9/9)
Sussex got a snoplus ticket for a high number of job failures, although simple test jobs ran okay. Matt asked if the problem persists; the reply was a resounding "not sure". In progress (think about closing) (21/9)

RALPP
116652(1/10)
A ticket from CMS, about some important Phedex ritual that must occur on the 3rd of November, when the stars are right. The ticket needs some confirmation and feedback, plus the nomination of one site acolyte to receive the DBParam secrets from CMS - but the ticket only got assigned to sites this morning. Assigned (5/10)

BRISTOL
116651(1/10)
Same as the RALPP ticket, Winnie has volunteered Dr Kreczko for the task. In progress (5/10)

ECDF
95303(Long, long ago)
glexec ticket. On hold (18/5)

DURHAM
116576(1/10)
Atlas ticket asking Durham to delete all files outside of the datadisk path. Oliver asks what this means for the other tokens (I think they can be sacrificed to feed datadisk, but Brian et al can confirm that). Waiting for reply (5/10)

SHEFFIELD
116560(30/9)
Sno+ jobs having trouble at Sheffield. Looks like a proxy going stale problem as only 10 Sno+ jobs at a time can run at Sheffield. Matt M asks if/how the WMS can be notified to stop sending jobs in such a case. In progress (30/9)

114460(18/6)
Gridpp Pilot roles. No news on this for a while, after the last attempt seemed to not quite work. In progress (30/7)

MANCHESTER
116585(1/10)
Biomed ticketed Manchester with problems from their VO nagios box - which Alessandra points out is due to there being no spare cycles for biomed to run on. Assigned (can be put on hold or closed?) (1/10)

LIVERPOOL
116082(7/9)
A classic Rod Availability ticket. On Hold (7/9)

LANCASTER (a little embarrassing that my own site has the most tickets)
116478 (28/9)
Another availability ticket, this time for Lancaster (which has been through the wars in September). Still trying to dig our way out, but even the Admin's broke. On hold (5/10)

116676 (5/10)
Another ROD ticket, Lancaster's not quite out of the woods. We think WMS access is somewhat broken. We have no idea about the sha2 error. In progress (5/10)

116366 (22/9)
Sno+ spotted malloc errors at Lancaster. The problems seemed to survive one batch of fixes, but I asked again if they still see problems after running a good number of jobs over the weekend. Waiting for reply (5/10)

95299 (In a galaxy far, far away)
glexec ticket. This was supposed to be done last week, after I had figured out "the formula" - but then last week happened. On hold (5/10)

QMUL
115959 (31/8)
LHCB job errors at QM, with a 70% pilot failure rate on ce05. Dan couldn't see where things were breaking (only that the CE wasn't publishing to APEL, and asks if this could be the cause of the problem). Waiting for reply (5/10)

116662 (5/10)
LHCB job failures on ce05 - almost certainly a duplicate of 115959, but it might have some useful information in it. Assigned (probably can be closed as a duplicate) (5/10)

IMPERIAL
116650 (1/10)
Imperial's invitation to the CMS Phedex DBParam ritual. Daniela's on it, as well as the other CMS sites. On hold (5/10)

BRUNEL
116649 (1/10)
Brunel's ticket for the great DBParam alignment of 2015. On hold (5/10)

116455 (28/9)
A CMS request to change the xrootd monitoring configs. Did you get round to doing this last week Raul? In progress (29/9)

EFDA-JET
115448 (3/8)
Biomed having trouble tagging the jet CE. The Jet admins think this has the same underlying issue as their other ticket 115496. In progress (25/9)

115496 (5/8)
Biomed unable to remove files from the jet SE. There are clues that suggest that some dns oddness is the cause, but it's not clear. In progress (18/9)

100IT
116358 (22/9)
Ticket complaining about a missing image at the site. After some to and fro, the ball is back in the site's court. In progress (2/10)

TIER 1
116618 (1/10)
The Tier 1's CMS DBParam ritual ticket. In progress (5/10)

Let me know if I missed ought.

T'OTHER VO NAGIOS
At time of writing things look a bit rough at QM, Liverpool (just getting over their downtime) and for Sno+ at Sheffield (likely related to their ticket).


Tools - MyEGI Nagios

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct, which may be extended depending on the situation. The VO-Nagios was also unavailable for two days but, as it runs on a VM, we restarted it yesterday. VO-Nagios uses the Oxford SE for its replication test, so it is currently failing those tests; I am looking to change to another SE.

Tuesday 09 June 2015

  • ARC CEs were failing the Nagios test because of the non-availability of an EGI repository (the Nagios test compares the CA version against the EGI repo). It started on 5th June, when one of the IP addresses behind the webserver stopped responding. The problem went away in approximately 3 hours but started again on 6th June, and was finally fixed on 8th June. No reason was given in any of the tickets opened regarding this outage.

Tuesday 17th February

  • Another period where message brokers were temporarily unavailable seen yesterday. Any news on the last follow-up?

Tuesday 27th January

  • Unscheduled outage of the EGI message broker (GRNET) caused a short-lived disruption to GridPP site monitoring (jobs failed) last Thursday 22nd January. Suspect BDII caching meant no immediate failover to stomp://mq.cro-ngi.hr:6163/ from stomp://mq.afroditi.hellasgrid.gr:6163/


VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
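For sites that maintain vomses entries by hand, the change corresponds to updating the port field on the relevant lines. A sketch of the format only — the server DNs are deliberately elided here, not real values:

```
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "<voms02 server DN>" "snoplus.snolab.ca"
"snoplus.snolab.ca" "voms03.gridpp.ac.uk" "15503" "<voms03 server DN>" "snoplus.snolab.ca"
```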

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 24th February

  • Next review of status today.

Tuesday 27th January

  • Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
  • Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

Tuesday 2nd December

  • Multicore status. Queues available (63%)
    • YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
    • NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
  • According to our table for cloud/VMs (26%)
    • YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
    • NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
  • GridPP DIRAC jobs successful (58%)
    • YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
    • NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
  • IPv6 status
    • Allocation - 42%
    • YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
    • NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex
  • Dual stack nodes - 21%
    • YES: Brunel; IC; QMUL; Oxford (4)
    • NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex, RAL T1 (15)


Tuesday 21st October

  • High loads seen in xroot by several sites: Liverpool and RALT1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).

Tuesday 9th September

  • Intel announced the new generation of Xeon based on Haswell.



Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda Meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report further up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, waiting for physics validation; evgen testing is underway and close to finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

Lost-file declarations: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing

Monitoring

Main page

DDM Accounting

space

Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months the T2 sites performing BELOW 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A