Operations Bulletin Latest


Bulletin archive

Week commencing Monday 18th January 2016
Task Areas
General updates

Tuesday 26th January

  • RHUL: spacetoken for snoplus?
  • Winnie: CREAM-CEs red, "No handlers could be found for logger 'stomp.py'"
  • Elena: how to limit the number of running jobs per user in condor -> Concurrency Limits.
  • DIRAC File Catalog Command Line Interface guide added to the GridPP User Guide by Tom.
  • What to update when adding a VO (the LSST example!).
  • Notes from the January GDB are now available.
  • Publishing CPUs...!
  • Sam: WLCG WORKSHOP Site Feedback on Storage technologies.
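On the Condor item above: a minimal sketch of HTCondor Concurrency Limits (the limit names and values here are illustrative, not from the bulletin):

```
# condor_config on the central manager (negotiator):
# a named limit caps the number of concurrently running jobs that request it
SNOPLUS_LIMIT = 100

# fallback value for any limit name without an explicit <NAME>_LIMIT
CONCURRENCY_LIMIT_DEFAULT = 50
```

Jobs opt in from the submit description file with `concurrency_limits = snoplus`; to cap each user rather than each VO, the schedd can be made to attach a limit named after the job owner (e.g. via a submit transform).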

Tuesday 19th January

  • There was a GDB last week. Agenda.
  • Additional material can now be found in the HEPSYSMAN meeting and GANGA workshop agenda pages.
  • Monday's ops update is available here.
  • As circulated to TB-SUPPORT: The T2 WLCG A/R results for December are now available.
    • ALICE. All okay.
    • ATLAS.
      • QMUL: 71%: 76%
      • RHUL: 89%: 95%
      • Lancaster: 0%: 0%
    • CMS
      • RALPP: 89%: 89%
    • LHCb
      • Lancaster: 81%: 96%
      • RALPP: 87%: 87%

Tuesday 12th January

  • Reminder: WLCG workshop registration closes on 22nd.
  • GDB this Wednesday with a security focus.

Tuesday 5th January

Tuesday 15th December

  • Simon: Raised a question about switch monitoring.
  • WMSes:
    • How many WMS servers do you have in production?
    • How many and which VOs are enabled?
    • Which VOs are using the service most?
    • If possible, can you provide the jobs number submitted per month (and per VO) through your instances during the last year (Dec 2014 - Nov 2015)?
  • Notes from Thursday's operations meeting.
  • T2 reliability & availability reports for November 2015, with corrections applied.
  • Govind: Process for dealing with lost files.
  • Janet issued a statement regarding the DDoS last week.

WLCG Operations Coordination - AgendasWiki Page

Tuesday 19th January

  • The next WLCG MW readiness group meeting will be on 27th January at 15:00 UTC: Agenda.
  • There is an ops coordination meeting this Thursday at 14:30 UTC: Agenda.

Tuesday 12th January

  • There was an ops coordination meeting last week. Minutes
  • Approach for configuring batch systems (e.g. setting up mem limits).
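As one concrete example of the memory-limit configuration under discussion, HTCondor can enforce job memory through cgroups (a sketch only; policy choice is a site decision):

```
# condor_config on the worker nodes
BASE_CGROUP = htcondor
# "hard" kills jobs that exceed their requested memory;
# "soft" only reclaims memory when the machine is under pressure
CGROUP_MEMORY_LIMIT_POLICY = hard
```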

Tuesday 5th January

  • Ops coordination meeting on 17th December. Minutes.
  • The next meeting is on 7th January. Agenda.

Tuesday 7th December

  • There was a WLCG operations coordination meeting last Thursday. Minutes | Agenda.
    • Please register for the WLCG workshop.
    • News: Red Hat is now fixing the openldap crash issue affecting the Top BDII and ARC-CE. Stay tuned.
    • T0: LSF 9 software deployed on all worker nodes
    • T1: CC-IN2P3 & PIC: Globus host certificate validation changes
    • T2: NTR
    • ALICE: heavy ion run has been smooth from the grid perspective
    • ATLAS: Tier-0 performance in terms of events/second reconstructed from the whole cluster is quite low (a few tens of Hz); huge I/O wait observed on the Wigner spinning-disk nodes. The full reprocessing campaign is planned to start on 14th December.
    • CMS: CMS Tier-0 workflows are driving some CERN OpenStack hardware to its limits: GGUS:118056.
    • LHCb: Significant MC generation incoming. MC simulation workflows have been executed successfully on commercial clouds, on both DBCE (up to 600 simultaneous jobs running) and Azure.
    • glexec: NTR
    • M/J features: Ongoing discussions clarifying key/value pairs. Next steps to review experience with implementations and installations, and update in view of technical note discussions.
    • HTTP TF: NTR
    • Info sys future: The Future Use Cases document is now ready in the WLCG Document Repository (PDF). Looking at an information system owned by WLCG (an interesting idea). Starting to prepare a roadmap to GLUE 2.0.
    • IPv6: VOMS still does not work with IPv6. ARGUS not really tested but no problem expected as Java has good IPv6 support.
    • MW readiness: Virtual meeting on 2nd December. Verification workflows in progress listed here.
    • MC: NTR
    • Network/transfer metrics: NTR
    • RFC proxies: CMS have switched test pilot factories to RFC proxies.
    • Squid monitoring/HTTP discovery: NTR

Monday 30th November

Tier-1 - Status Page

Tuesday 26th January A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.

  • We are investigating why LHCb batch jobs sometimes fail to write results back to Castor (and they sometimes fail to write remotely as well). A recent change has improved, but not fixed, this problem.
  • We plan to migrate Atlas and LHCb data from the T10KC (5TB) to T10KD (8TB) generation tapes. Details (including timings) yet to be finalized but likely to start with Atlas first.
  • We have seen some problems with the CMSTape instance. Two out of the five disk servers in CMSTape were taken out of service on Monday (11th) owing to double disk failures. Some other parameter changes were made to optimise throughput. Service being kept under review. An additional disk server was added at the end of last week, with plans to add a further one in the next week or two.
  • We have seen a higher rate of problems on some disk servers in recent weeks. These are mainly individual disk failures. This is being reviewed.
  • We are working on a refresh of the database system behind the LFC.
Storage & Data Management - Agendas/Minutes

Wednesday 20 Jan

  • Operational issues (DPM database errors) and kablooie
  • hepsysman report
  • Gathering site issues and thoughts in prep'n for the coming WLCG workshop and ATLAS jamboree

Wednesday 13 Jan

  • "Diskless" Tier 2 testing to go ahead anyway at Oxford; also Bristol may be interesting/interested
  • Need feedback from VOs on catalogue format and location before rolling out to remaining sites

Wednesday 06 Jan

  • Merry and happy. Apart from Glasgow losing power, most T2s came through the hols relatively unscathed
  • Key member of T2 storageanddatamanagement supportologists leaving *sad face*
  • Excellent DiRAC progress over hols (thanks, Lydia!), now need Leicester started.
  • More generally, GridPP as a data (only) infrastructure, bring your own compute. Good case, but you need to bring your own catalogue (more or less), too, which may not suit everyone.
  • Will a future T2 be built on CEPH? Not likely...

Wednesday 16 Dec

  • Sam presented at and reports from DPM workshop
  • Last preparations/coordination before the hols
Tier-2 Evolution - GridPP JIRA

Monday 25 Jan

  • Vac 00.20.00 released. Emulates OpenStack environment for VMs, Cloud Init, contextualization from HTTP.
  • Restarted testing of Cloud Init ATLAS VMs

Tuesday 19 Jan

  • Vcycle now supports EC2 API, and preparing to test with Open Nebula at RAL Tier1
  • Next Vac release (00.20) being tested with LHCb production. This removes the need to have an internal NFS server and supports VMs using Cloud Init.
  • Some work last month on revised ATLAS VMs; hoping to converge with VMs run at CERN and on HLT.
  • Vac-in-a-Box updated for Vac 00.20 and NFS-less operation, and numerous feature requests (e.g. bulk adding of hypervisors)

Tuesday 8 Dec

  • Liverpool has set up Vac with 126 VM slots. LHCb production MC and GridPP SAM tests running.
  • GridPP DIRAC VMs now working with new dirac.gridpp.ac.uk service: needed an ad-hoc configuration value adding (/Resources/Computing/CEDefaults/VirtualOrganization=gridpp) for the pilots due to the multi-VO support (GRIDPP-9)
  • For rollout, would like to set up a JIRA component for each site (several exist already.) Will be mailing sites to get the site contacts involved before we add each site.
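For reference, the ad-hoc option quoted above corresponds to this fragment of the DIRAC configuration (CS syntax sketch; in practice the value is set through the configuration service rather than edited by hand):

```
Resources
{
  Computing
  {
    CEDefaults
    {
      VirtualOrganization = gridpp
    }
  }
}
```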

Tuesday 1 Dec

  • Vac (0.20pre) now provides contextualization, metadata (EC2/OpenStack), and Machine/Job Features via HTTP rather than an ISO image and NFS. (GRIDPP-27). This should allow VMs expecting to be run by OpenStack to work on Vac without modification.
  • CernVM image signature checking, and APEL Sync record generation (GRIDPP-10) are now also in Vac 0.20pre.

Tuesday 24 Nov

  • Cloud Init demonstrated with modified version of (old) GridPP DIRAC VMs
  • Cloud Init support in Vac 0.20pre (GRIDPP-27)
  • Progress on VMs for new GridPP DIRAC service: multi-VO config currently preventing matching. (GRIDPP-9)

Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 24th November

  • Slight delay for Sheffield.

Tuesday 3rd November

  • APEL delay (normal state) Lancaster and Sheffield.

Tuesday 20th October The WLCG MB decided to create a Benchmarking Task force led by Helge Meinhard see talk

Documentation - KeyDocs

Tuesday 12th January

  • The VO ID cards (and hence the YAIM records) for CDF, PLANCK, SUPERBVO, LSST, MAGIC and ENMR have changed a bit. Sites that support any of these may want to have a look. See the GridPP approved VOs page.
  • WLCG Information System Evolution Task Force is drafting refined definitions for LOG_CPU and PHYS_CPU, as well as the benchmark/calibration process. Progress is documented in this agenda:

In particular, sites should note the 'BenchmarkingProcess.txt' (attached to agenda) which lays out in general terms how to run benchmark instances to obtain maximum throughput, and the GridPP Publishing Tutorial (https://www.gridpp.ac.uk/wiki/Publishing_tutorial) which WLCG propose to adopt (with some modifications.)

Tuesday 1st December

  • The sixt and hone VOs have been removed from the GridPP list.

Tuesday 24th November Steve J: problems with voms server at fnal, voms.fnal.gov, resolved. Approved VOs document updated with newest records for those VOs affected, CDF, DZERO, LSST. Also, note changes to CA_DN for PLANCK and CDF. https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

Friday 6th Nov, 2015

SteveJ: Advice to admins published about a common GSS error, globus_gsi_callback_module: Could not verify credential etc.
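A quick way to reproduce the verification step behind that GSS error is `openssl verify` against the CA in question. The sketch below builds a throwaway CA and host certificate so it runs anywhere; on a real grid host you would instead verify `/etc/grid-security/hostcert.pem` with `-CApath /etc/grid-security/certificates` (paths assumed to be the usual ones):

```shell
# build a scratch CA and a host cert signed by it, purely for demonstration
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -subj "/CN=Scratch CA"
openssl req -newkey rsa:2048 -nodes \
    -keyout "$tmp/host.key" -out "$tmp/host.csr" -subj "/CN=host.example.ac.uk"
openssl x509 -req -days 1 -in "$tmp/host.csr" \
    -CA "$tmp/ca.pem" -CAkey "$tmp/ca.key" -CAcreateserial -out "$tmp/host.pem"

# the check itself: prints "<path>: OK" when the chain verifies,
# and an error resembling the GSS one when the CA (or its CRLs) is missing/stale
openssl verify -CAfile "$tmp/ca.pem" "$tmp/host.pem"
```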


Tuesday 20th October, 2015

Approved VOs document updated with temporary section for LZ

Tuesday 29th September Steve J: problems with the voms server at FNAL, voms.fnal.gov, have been detected; I will resolve them soon and may issue an update to Approved VOs, alerting sites via TB_SUPPORT should that occur. Approved VOs potentially affected are CDF, DZERO and LSST. Please do not act yet.

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 11th January

  • A meeting on Monday 11th. Agenda. For the meeting link see Indico.

Tuesday 15th December

There was a meeting yesterday, agenda is here: https://wiki.egi.eu/wiki/Agenda-14-12-2015

  • New meetings calendar: https://indico.egi.eu/indico/categoryDisplay.py?categId=32
  • New summary page: https://wiki.egi.eu/wiki/Operations_Meeting
  • UMD Preview repository
  • UMD releases
  • Decommissioning of dCache 2.6
  • Decommissioning of SL5
    • SL5 services must be decommissioned by the end of April 2016; a broadcast was sent in December, and probes will raise warnings from February 2016 to help with the decommissioning
  • APEL on SL5
  • WMS Usage
  • New CE/batch system accounting integration
  • Raised a question on clarifying the future of the VO Nagios in a centralised ARGO world

Monitoring - Links MyWLCG

Tuesday 1st December

Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.

Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Monday 11th January

  • There was a problem with the dashboard during the week, where alarms wouldn't clear even though they had cleared in Nagios. The portal people are aware of this.
  • The current alarms in Manchester (vomsserver) are thought to be fixed - the dashboard just hasn't caught up.

Monday 14th December

  • The Janet issue affected ROD dashboard access, making it almost unusable.
  • GGUS was updated on Wednesday (09/12/15) and was in downtime for two hours. The update did not go smoothly and the downtime was extended. The GGUS interface did not work through the dashboard for most of the morning on Wednesday.
  • There were quite a few alarms throughout the week - most disappeared without intervention.
  • Three open tickets remain against QMUL, RHUL and Lancaster.

Rollout Status WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of site BDII on Centos 7 done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


Security - Incident Procedure Policies Rota

Tuesday 26th January

  • CVE-2016-0728
  • The IGTF has released a regular update to the trust anchor repository (1.71), for distribution ON OR AFTER January 25th.

Tuesday 12th January

Tuesday 15th December

  • No updates applicable for the UK
  • News from elsewhere: EGI CSIRT is now suspending multiple sites following CVE-2015-7181/2/3, largely due to simple failure to acknowledge the notification, which was all that was required.

The EGI security dashboard.

Services - PerfSonar dashboard | GridPP VOMS

- This includes notifying of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update).

Tuesday 8th December

  • Given the recent network issues and role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial and hopefully the T1 too. Andrew will seek an update to the whois records and name servers once the other host sites are confirmed.
  • Looking at the network problems this week will be of interest. Duncan supplied this link and Ewan the one for the dual stack instance.
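A secondary DNS of the kind proposed is a small piece of BIND configuration on the host at Imperial (a sketch; the master's address here is a placeholder, not the real gridpp.ac.uk master):

```
// named.conf on the secondary name server
zone "gridpp.ac.uk" {
    type slave;
    masters { 192.0.2.10; };   // illustrative primary, replace with the real master
    file "slaves/gridpp.ac.uk.db";
};
```

The whois/name-server update mentioned above is what makes the new secondary visible to resolvers once it is serving the zone.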

Tuesday 6th October

Tuesday 14th July

  • GridPP35 in September will have a part focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.


Monday 25th January 2016, 14.30 GMT

Looks like hepgrid2.ph.liv.ac.uk at Liverpool is playing up for all VOs, and the Sheffield SE is misbehaving for the gridpp VO. Other than that it looks clean.

43 Open UK Tickets this week.

That ticket to the NGI...
118930 (18/1)
Steve J put in a comprehensive reply about what Liverpool do to get their publishing kinda right. The view on this ticket from last week was to close it with a <carefully|harshly> worded statement about why this is a bit of a pointless request. Who was formulating the reply? If it was me I dropped that ball! Assigned (19/1)

Pilot Problems.
BRUNEL: 117710 Pheno. On Hold (19/11/15)
QMUL: 117723 Pheno - hopefully sorted. Waiting for reply (25/1)
SHEFFIELD: 114460 gridpp et al. In Progress (20/1)
RALPP: 118628 LZ (and maybe LSST?). In progress (14/1)

We have a few pilot rollout tickets, the last two being worked on but proving problematic.

119027 (22/1)
As seen on the gridpp-storage list, Sno+ have asked RHUL (and will no doubt ask others) for storage space (~20TB). In progress (22/1)

(For the interest of others: Govind's other thread on gridpp-storage was likely triggered by https://ggus.eu/?mode=ticket_info&ticket_id=118553)

118985 (21/1)
QM have banished biomed from their cluster until they have a batch system that can put Biomed jobs in a c-group cage (looking at slurm). On Hold (21/1)
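The "c-group cage" QM are after maps onto Slurm's cgroup task plugin; a sketch of the relevant knobs (values illustrative), assuming a stock Slurm install:

```
# slurm.conf: hand task containment to the cgroup plugin
TaskPlugin=task/cgroup

# cgroup.conf: confine jobs to their requested memory
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes
# allow jobs up to 100% of their allocation before the limit bites
AllowedRAMSpace=100
```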

118155 (4/12)
Talking of Biomed, they've asked if they've successfully cleaned up all their files on the Birmingham SE - a cheeky uberftp onto your SE suggests the biomed directory is still full of cra.. I mean, files. In Progress (20/1)

HTTP TF Tickets
118787 (ECDF)
118764 (SHEFFIELD)
Feel free to poke the gridpp storage group for help with these. (I left out the 2 Manchester tickets as their immediate showstopper isn't their configs, but they can ask for help too!)

Manchester, Oxford, Birmingham, Sussex, RHUL, Sheffield, Brunel and QMUL still open - a mix of chugging along nicely and being very much "On Hold".

Tools - MyEGI Nagios

Monday 30th November

  • The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.
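The usual EGI/WLCG definitions behind such an availability/reliability calculation (assumed here; the ARGO document is the authoritative source) can be sketched as:

```python
def a_and_r(up_hours, scheduled_down_hours, unknown_hours, total_hours):
    """Availability and reliability as fractions.

    Assumed definitions:
      availability = uptime / (total - unknown)
      reliability  = uptime / (total - unknown - scheduled downtime)
    so scheduled downtime hurts availability but not reliability.
    """
    known = total_hours - unknown_hours
    availability = up_hours / known
    reliability = up_hours / (known - scheduled_down_hours)
    return availability, reliability

# A site up 600h of a 744h month, with 120h scheduled downtime and 24h unknown:
a, r = a_and_r(600, 120, 24, 744)
print(f"A = {a:.0%}, R = {r:.0%}")  # reliability comes out higher than availability
```

This is why the monthly reports above often show pairs like "81%: 96%": the second (reliability) figure excludes declared downtimes.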

Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was something of a double whammy as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st October, and this may be extended depending on the situation. The VO-Nagios was also unavailable for two days, but we restarted it yesterday as it runs on a VM. VO-Nagios uses the Oxford SE for its replication test, so it is failing those tests. I am looking to change to some other SE.

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
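For sites maintaining vomses files by hand, the updated entries would follow the usual five-field format (the DN below is illustrative, not the servers' real DN; take the actual values from the Approved VOs page):

```
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "/C=UK/O=eScience/CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"
"snoplus.snolab.ca" "voms03.gridpp.ac.uk" "15503" "/C=UK/O=eScience/CN=voms03.gridpp.ac.uk" "snoplus.snolab.ca"
```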

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 24th February

  • Next review of status today.

Tuesday 27th January

  • Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
  • Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

Tuesday 2nd December

  • Multicore status. Queues available (63%)
    • YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
    • NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
  • According to our table for cloud/VMs (26%)
    • YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
    • NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
  • GridPP DIRAC jobs successful (58%)
    • YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
    • NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
  • IPv6 status
    • Allocation - 42%
    • YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
    • NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex (11)
  • Dual stack nodes - 21%
    • YES: Brunel; IC; QMUL; Oxford (4)
    • NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex; RAL T1 (15)

Tuesday 21st October

  • High loads seen in xroot by several sites: Liverpool and RALT1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).

Tuesday 9th September

  • Intel announced the new generation of Xeon based on Haswell.

Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports


GridPP ops meeting - Agendas Actions Core Tasks


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda Meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report further up this page.

WLCG Grid Deployment Board - Agendas MB agendas


NGI UK - Homepage CA


UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015


• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved, and the grid is filled again

• MC15 is expected to start soon, waiting for physics validations; evgen testing is underway and close to finalised. Simulation expected to be broadly similar to MC14, no blockers expected.


• Rucio in production since Dec 1st and is ready for LHC RUN-2. Some fields need improvements, including transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

Lost-file declarations: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing


Main page

DDM Accounting




• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months, the T2 sites performing BELOW 80% are reported to the International Computing Board.





  • N/A
To note

  • N/A