Operations Bulletin Latest


Bulletin archive


Week commencing Monday 8th February 2016
Task Areas
General updates

Tuesday 9th February

  • The WLCG Collaboration meeting took place last week: Agenda.
  • Alessandra: Changes of memory settings in panda
  • CHEP2016 first bulletin.
  • NOTICE: Java upgrade impacts experiment activities. A recent set of OpenJDK 6/7/8 updates on SL5/SL6/SL7 disables support for the MD5 hash algorithm in certificates and proxies, requiring sysadmin intervention (see the sketch after this list).
  • RIPE Academic Cooperation Initiative.
  • voms2 for lsst
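
On the MD5 notice above: a minimal sketch, assuming a stock OpenJDK layout on SL6, of the workaround many sites applied. Check the actual java.security path and property contents on your own nodes before editing, and restart the affected Java services afterwards.

  # Hedged sketch: re-enable MD5 for certificate path validation on an affected node.
  # The path below is typical for OpenJDK on SL6 but may differ locally.
  JS=/usr/lib/jvm/jre/lib/security/java.security
  grep '^jdk.certpath.disabledAlgorithms' "$JS"
  # If MD5 is listed, drop it from the property (or override the file at runtime
  # with -Djava.security.properties=<file>), then restart the Java services.
  sed -i.bak 's/^\(jdk.certpath.disabledAlgorithms=.*\), *MD5/\1/' "$JS"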


Tuesday 26th January

  • RHUL: spacetoken for snoplus?
  • Winnie: CREAM-CEs red, "No handlers could be found for logger "stomp.py""
  • Elena: how to limit the number of running jobs per user in HTCondor -> Concurrency Limits (see the sketch after this list).
  • DIRAC File Catalog Command Line Interface guide added to the GridPP User Guide by Tom.
  • What to update when adding a VO (the LSST example!).
  • Notes from the January GDB are now available.
  • Publishing CPUs...!
  • Sam: WLCG WORKSHOP Site Feedback on Storage technologies.
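
On the HTCondor question above, a minimal sketch of how Concurrency Limits work; the limit names and numbers are made up, and a per-user cap relies on each job requesting a limit named after its owner (e.g. injected by the CE's submit wrapper), so treat this as an outline rather than a recipe.

  # Hedged sketch of HTCondor Concurrency Limits (names and numbers are examples only).
  # Pool/negotiator configuration: define named limits plus a default cap.
  cat > /etc/condor/config.d/90-concurrency-limits.conf <<'EOF'
  # At most 50 running jobs may hold the "biomed" limit at any one time;
  # any limit name without an explicit value falls back to 100.
  BIOMED_LIMIT = 50
  CONCURRENCY_LIMIT_DEFAULT = 100
  EOF
  condor_reconfig
  # Submit side: each job declares the limit(s) it consumes in its submit file, e.g.
  #   concurrency_limits = biomed
  # A per-user cap follows from having the CE insert a per-owner limit name here.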

Tuesday 19th January

  • There was a GDB last week. Agenda.
  • Additional material can now be found in the HEPSYSMAN meeting and GANGA workshop agenda pages.
  • Monday's ops update is available here.
  • As circulated to TB-SUPPORT: The T2 WLCG A/R (availability : reliability) results for December are now available.
    • ALICE. All okay.
    • ATLAS.
      • QMUL: 71%: 76%
      • RHUL: 89%: 95%
      • Lancaster: 0%: 0%
    • CMS
      • RALPP: 89%: 89%
    • LHCb
      • Lancaster: 81%: 96%
      • RALPP: 87%: 87%


Tuesday 12th January

  • Reminder: WLCG workshop registration closes on 22nd.
  • GDB this Wednesday with a security focus.


Tuesday 5th January

WLCG Operations Coordination - Agendas Wiki Page

Tuesday 9th February

  • The next ops coordination meeting will be on 18th February.

Tuesday 26th January

Tuesday 19th January

  • The next WLCG MW readiness group meeting will be on 27th January: Agenda at 15:00 UTC.
  • There is an ops coordination meeting this Thursday at 14:30 UTC: Agenda.


Tier-1 - Status Page

Tuesday 9th February A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.

  • We are investigating why LHCb batch jobs sometimes fail to write results back to Castor (and why they sometimes fail to write remotely as well). A recent change has improved, but not fixed, this problem.
  • We will migrate Atlas and LHCb data from the T10KC (5TB) to T10KD (8TB) generation tapes. Details (including timings) are yet to be finalized, but the migration is likely to start with Atlas.
  • We have seen a higher rate of problems on some disk servers in recent weeks. These are mainly individual disk failures. This is being reviewed. We had significant data loss from a disk server in AtlasScratchDisk when it suffered a triple disk failure a week ago.
  • We are working on a refresh of the database system behind the LFC.
  • All ARC-CEs have been updated to version 5.0.5.
  • We are preparing to put a HAProxy load balancer in front of the FTS service. Initially this will be for our "test" FTS3 service (used by Atlas).
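
For context, a rough illustration of that sort of HAProxy front end; the host names, port and timeouts below are placeholders, not the Tier-1's actual configuration.

  # Hedged sketch only: TCP pass-through load balancing for an FTS3 REST endpoint.
  cat > /etc/haproxy/haproxy.cfg <<'EOF'
  defaults
      mode tcp
      timeout connect 10s
      timeout client  1m
      timeout server  1m
  frontend fts_rest
      bind *:8446
      default_backend fts_nodes
  backend fts_nodes
      balance roundrobin
      server fts-test01 fts-test01.example.ac.uk:8446 check
      server fts-test02 fts-test02.example.ac.uk:8446 check
  EOF
  service haproxy restart
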
Storage & Data Management - Agendas/Minutes

Wednesday 20 Jan

  • Operational issues (DPM database errors) and kablooie
  • hepsysman report
  • Gathering site issues and thoughts in preparation for the coming WLCG workshop and ATLAS jamboree

Wednesday 13 Jan

  • "Diskless" Tier 2 testing to go ahead anyway at Oxford; also Bristol may be interesting/interested
  • Need feedback from VOs on catalogue format and location before rolling out to remaining sites

Wednesday 06 Jan

  • Merry and happy. Apart from Glasgow losing power, most T2s came through the hols relatively unscathed
  • Key member of the T2 storage and data management supportologists leaving *sad face*
  • Excellent DiRAC progress over hols (thanks, Lydia!), now need Leicester started.
  • More generally, GridPP as a data (only) infrastructure, bring your own compute. Good case, but you need to bring your own catalogue (more or less), too, which may not suit everyone.
  • Will a future T2 be built on CEPH? Not likely...

Wednesday 16 Dec

  • Sam presented at and reports from DPM workshop
  • Last preparations/coordination before the hols
Tier-2 Evolution - GridPP JIRA

Monday 8 Feb

  • Cloud Init ATLAS VMs successfully running production jobs at Manchester. Looking at logging and VM lifetime.

Monday 25 Jan

  • Vac 00.20.00 released. Emulates OpenStack environment for VMs, Cloud Init, contextualization from HTTP.
  • Restarted testing of Cloud Init ATLAS VMs, and now getting jobs running to Finished state.

Tuesday 19 Jan

  • Vcycle now supports the EC2 API; preparing to test with OpenNebula at the RAL Tier1
  • Next Vac release (00.20) being tested with LHCb production. This removes the need to have an internal NFS server and supports VMs using Cloud Init.
  • Some work last month on revised ATLAS VMs; hoping to converge with VMs run at CERN and on HLT.
  • Vac-in-a-Box updated for Vac 00.20 and NFS-less operation, and numerous feature requests (e.g. bulk adding of hypervisors)


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 9th February

  • 4th Feb: The data from the APEL summariser that was fixed yesterday has now propagated through the data pipeline and the Accounting Portal views and the Sync and Pub tests are all working again.

Tuesday 24th November

  • Slight delay for Sheffield.

Tuesday 3rd November

  • APEL delay (normal state) Lancaster and Sheffield.

Tuesday 20th October The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard; see talk

Documentation - KeyDocs

Tuesday 9th February

  • Guidelines for using the DIRAC command line tools and the DIRAC File Catalog metadata functionality have been added to the UserGuide.
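
For flavour, a hedged sketch of the kind of commands the new guide covers; the group name, catalogue paths and metadata fields below are invented for illustration, so follow the UserGuide itself for real values.

  # Hedged sketch: DIRAC File Catalog metadata via the standard DFC CLI.
  dirac-proxy-init -g gridpp_user      # obtain a proxy for your VO group (group name is an example)
  dirac-dms-filecatalog-cli            # start the interactive DFC shell
  # Inside the shell (paths and field names are made up):
  #   meta index -d run int                         # declare an integer directory metadata field
  #   meta set /gridpp/user/s/someuser/data run 42  # tag a directory
  #   find /gridpp run=42                           # find files under directories with run=42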

Tuesday 12th January

  • The VO ID cards (and hence the YAIM records) for CDF, PLANCK, SUPERBVO, LSST, MAGIC and ENMR have changed a bit. Sites that support any of these may want to have a look. See the GridPP approved VOs page.
  • WLCG Information System Evolution Task Force is drafting refined definitions for LOG_CPU and PHYS_CPU, as well as the benchmark/calibration process. Progress is documented in this agenda:
https://indico.cern.ch/event/471965/

In particular, sites should note the 'BenchmarkingProcess.txt' (attached to agenda) which lays out in general terms how to run benchmark instances to obtain maximum throughput, and the GridPP Publishing Tutorial (https://www.gridpp.ac.uk/wiki/Publishing_tutorial) which WLCG propose to adopt (with some modifications.)

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 8th January


Monitoring - Links MyWLCG

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Tuesday 26th January

  • On Friday, one of the message brokers was in downtime and due to a bug, nagios probes were not failing over to the working one.
  • There was another issue which prevented applying a workaround in the gridppnagios server.
  • A new rota is being compiled.

Monday 11th January

  • There was a problem with the dashboard during the week, where alarms wouldn't clear even though they had cleared in the nagios. The portal people are aware of this.
  • The current alarms in Manchester (vomsserver) are thought to be fixed - the dashboard just hasn't caught up.

Monday 14th December

  • Janet issue affected ROD dashboard access making it almost unusable.
  • GGUS was updated on Wednesday (09/12/15) and was in downtime for two hours. The update did not go smoothly and the downtime was extended. The GGUS interface did not work through the dashboard for most of the morning on Wednesday.
  • There were quite a few alarms throughout the week - most disappeared without intervention.
  • Three open tickets remain against QMUL, RHUL and Lancaster.


Rollout Status WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of site BDII on Centos 7 done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


References


Security - Incident Procedure Policies Rota

Monday 8th February

  • EGI SVG Advisory 'HIGH' risk CVE-2016-0728 Linux Kernel vulnerability [EGI-SVG-2016-10376]
  • WLCG Collaboration Workshop: Approximate summary of security session with a ~3yr timeframe (by IanN)
    • Use of federated identity management will continue to increase, with more reliance/trust on home institutions and VOs to manage and trace users.
    • VMs/Containers/cgroups will replace glexec ("hurrah!") but only as appropriate accountability/traceability policy enforcement mechanisms are put in place ("aww!"). (esp. multi-user pilots etc.)
    • Changing risk assessment: more targeted phishing; incidents on commercial clouds; move to more standard software ....
    • All of the above drives the need for improved monitoring and "intelligence" sharing (SOC model and Sirtfi collaboration etc)
      • Improve incident response support for sites lacking expertise. Perhaps looking at developing monitoring "appliance".

Tuesday 26th January

  • CVE-2016-0728 Linux kernel: use-after-free in the keyring facility, local privilege escalation. EGI SVG Advisory in the works. Affects RH7 and derivatives/similar; RH5, RH6 and derivatives are not affected. RH/SL/CentOS updates published 25/01/2016 (a quick check sketch follows this list).
  • The IGTF has released a regular update to the trust anchor repository (1.71) - for distribution ON OR AFTER January 25th.
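
A quick, hedged way to check whether a node has picked up the fix; the exact fixed kernel version differs per distribution, so go by the vendor errata rather than anything here.

  # Hedged sketch: check for the CVE-2016-0728 keyring fix on an RPM-based node.
  uname -r                                              # currently running kernel
  rpm -q --changelog kernel | grep -i 'CVE-2016-0728'   # does the installed kernel claim the fix?
  yum update kernel && reboot                           # pull in the vendor's patched kernel if not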

Tuesday 12th January

Tuesday 15th December

  • No updates applicable for the UK
  • News from elsewhere: EGI CSIRT activity is now suspending multiple sites following CVE-2015-7181/2/3, largely due to simple failure to acknowledge the notification, which was all that was required.

The EGI security dashboard.


Services - PerfSonar dashboard | GridPP VOMS

This section includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 8th December

  • Given the recent network issues and the role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial and hopefully the T1 too (a rough sketch of the secondary zone configuration follows this list). Andrew will seek an update to the whois records and name servers once the other host sites are confirmed.
  • A look at the network problems this week will be of interest. Duncan supplied this link and Ewan the one for the dual-stack instance.
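
For reference, a hedged sketch of what the secondary zone might look like in BIND on the new host; the master address and file name are placeholders, not the real GridPP primary.

  # Hedged sketch: a BIND secondary (slave) zone for gridpp.ac.uk.
  cat >> /etc/named.conf <<'EOF'
  zone "gridpp.ac.uk" {
      type slave;
      masters { 192.0.2.10; };         // placeholder address of the existing primary
      file "slaves/gridpp.ac.uk.db";
  };
  EOF
  rndc reconfig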

Tuesday 6th October

Tuesday 14th July

  • GridPP35 in September will have a part focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.


Tickets

Monday 8th February 2016, 13.30 GMT
43 open UK tickets this month (down from 44). Going over all of them, in roughly alphabetical order.

NGI
118930 (18/1)
That NGI information ticket, linked to the "wrong" (according to some) information being published by the UK ARC CEs. This has haunted us for a while; the consensus was that the ticket is a load of B-word and not really worth worrying over, but it does warrant a response (from someone other than Steve J). Assigned (19/1)

SUSSEX
With Matt RB off to pastures green, Sussex is in limbo - I'll contact Jeremy M concerning last week's fresh tickets.

117894 (23/11)
Atlas Consistency Checking. On hold (25/1)

118289 (10/12)
Gridpp Pilots. On hold (25/1)

118337 (14/12)
The Sussex SE was not working for Sno+ - the most serious of these older issues. On hold (25/1)

119383 (5/2)
ROD Availability ticket. Assigned (5/2)

119384 (5/2)
ROD CA distribution ticket. Maybe the two ROD tickets are correlated (i.e. if we fix this one the previous one will soothe itself?) Assigned (5/2)

RALPP
118945 (19/1)
Poor CMS SAM results for RALPP due to digi-reco work pummeling the RALPP storage - Chris has asked for the digi-reco workload to stop at RALPP, then asked for clarification as to why the site was still in unknown state. Waiting for reply (25/1)

118628 (5/1)
LZ Pilot deployment at RALPP. Chris has submitted a bug report to nordugrid to fix the issue (http://bugzilla.nordugrid.org/show_bug.cgi?id=3529), which was fixed and should be available in the next release. On Hold (26/1)

OXFORD
119197 (29/1)
CMS has asked to change some CRAB site configs at T3s - Daniela has asked Chris B if he's the one looking after this for Oxford. Assigned (3/2)

117892 (23/11)
Atlas consistency checks. Ewan has firmly and clearly put this on the backburner. On hold (12/1)

BIRMINGHAM
118155 (4/12)
Biomed having a clear up of their stuff on the Brummie SE. Franck has given the nod for deleting the dark data left in the DPM after their cleanup efforts. It's on their heads now! In progress (2/2)

117890 (23/11)
Another Atlas Storage Consistency Checking ticket. Any chance to have a look at this again? On hold (15/12)

GLASGOW
117706 (19/11)
Another pilot ticket, this time for pheno. Glasgow were going to roll this into their overhaul of their identity management gubbins, but the Universe messed with their plans. How goes things? On hold (15/1)

118052 (30/11)
HTTP support on the Glasgow SE. I suspect progress here took a similar shoeing to the identity management plan - but the ticket could do with an update (and maybe on holding). In Progress (4/1)

ECDF
118787 (12/1)
Another HTTP ticket. Let us know if you need a hand Marcus and Andy. Or if you're too busy to make this a priority consider on-holding it. In progress (12/1)

95303 (1/7)
Tarball glexec ticket. On hold for a very long time.

An update on this - I managed to put in some good hours on trying to build a relocatable glexec last week, successfully building from source glexec and the lcas/lcmaps stack. *But* I still have rpath problems - short of attacking every lib file with patchelf I'm not sure how to proceed, and the process is such a mess that I'm not sure if I'll ever manage to make it into a proper recipe (much like my cocoa-butter shortbread).
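
Roughly the kind of patchelf pass being contemplated, for anyone picking this up; the install root and library names are illustrative, and this is explicitly not a working recipe yet.

  # Hedged sketch only: force a relocatable RPATH onto every shared library in a
  # tarball-installed glexec/LCAS/LCMAPS tree.
  GLEXEC_ROOT=/opt/glexec-tarball      # illustrative install root
  find "$GLEXEC_ROOT" -name '*.so*' -type f | while read -r lib; do
      patchelf --set-rpath "$GLEXEC_ROOT/usr/lib64:$GLEXEC_ROOT/usr/lib" "$lib"
  done
  patchelf --print-rpath "$GLEXEC_ROOT/usr/lib64/liblcas.so.0"   # spot-check one library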

SHEFFIELD
119374 (5/2)
A fresh ticket from Biomed, about incorrect/no dynamic information being published at Sheffield. In progress (5/2)

118789 (12/1)
ROD Information system ticket, almost certainly caused by the same underlying issue. Is the bdii service on your CEs silently dying or failing to update?

114460 (18/6)
Gridpp Pilots. Changes were implemented but at last check things weren't working right. How goes it now? In progress (20/1)

117886 (23/11)
Atlas Storage Consistency Check ticket - any luck with this? On hold (29/1)

118764 (12/1)
HTTP support ticket for the Sheffield SE. Have you had a chance to have a look at this? In progress (25/1)

The Storage list can lend a hand fixing either of these issues (which goes for everyone of course).

MANCHESTER
118679 (7/1)
HTTP support (atlas edition). Hit a problem due to there being no outside-a-space-token space at Manchester. On Hold (12/1)

118674 (7/1)
HTTP Support (lhcb edition). As above. On Hold (12/1)

117885 (23/11)
Atlas Storage Consistency Checks - hit the same problem as the previous 2 tickets. On hold (10/1)

118603 (4/1)
A VOMS ticket rather than a site ticket: removal of the nsccs.ac.uk VO. The VO has been removed from the other UK VOMS servers. In progress (5/2)

LANCASTER
95299 (1/7)
Lancaster's glexec tarball ticket. See the entry above - although I really need to update the ticket properly! Practice what you preach, Matt! On hold.

RHUL
119380 (5/2)
ROD Low availability ticket - the site is in the green now, so it's the usual 30-day wait. On hold (8/2)

117881 (23/11)
Atlas SCC ticket. On hold until March. On hold (1/2)

QMUL
117723 (19/11)
Pilots at QM. Dan's been working on this, and asked Daniela for a picture of what should be enabled[1] - Any joy? In progress (27/1)

[1] http://www.hep.ph.ic.ac.uk/~dbauer/dirac/site_pilot_status.html

117880 (23/11)
Atlas SCC ticket (wish I had started using that acronym sooner). Just waiting for the nod from atlas that all is well. Dan included the script he uses that may be useful for other STORM sites. Waiting for reply (4/2)

118985 (21/1)
QM has banished biomed from their queues until QM have a cgroupy solution to the ill-behaved biomed user jobs. Biomed have asked that the ban be reconsidered and problem users be dealt with by the VO. QM are perfectly right to say no to this, but it'll be nice to not leave them hanging. On hold (1/2)

119348 (4/2)
LHCB have noticed cvmfs issues on some nodes, which Dan couldn't replicate. Dan ponders that perhaps this is caused by ephemeral memory issues on the nodes, noting more swap being used recently. Waiting for reply (4/2)

119409 (8/2)
Fresh ROD emi glexec ticket - things exploded at the weekend but the QM admins are fighting the good fight. In progress (8/2)

IMPERIAL
119294 - but this got solved by the time I got to it (it concerned a Java update breaking MD5).

BRUNEL
117878 (23/11)
Atlas SCC - Raul provided an example and is waiting on atlas to give a yay or nay before deploying. Waiting for reply (18/1)

118740 (10/1)
Atlas MCORE problems at Brunel, which look to be caused by some extreme Condor oddness; Raul reconfigured Condor to give a better view. Any joy? In progress (25/1)

100IT
119002 (Reopened)
116358 (In Progress)
Not going into detail with these as I'm not sure what the crack is with 100IT.

AND FINALLY...

THE TIER 1
118809 (12/1)
The Tier 1 provided feedback on configuring memory limits for batch jobs; the ticket was left open for follow-up. On hold (13/1)

116864 (12/10)
CMS AAA tests failing. Andrew L reports that the CASTOR headnode has received what sounds like a big fix which will hopefully improve things. In progress (29/1)

119389 (5/2)
LHCB data transfer problem to RAL. Being looked at. In progress (5/2)

117683 (18/11)
Another publishing ticket. How we love those! This one is about CASTOR not publishing GLUE 2. Code was written by Jens and Rob but not integrated, so something that works might be a long way off. That was a month ago - any news since? In progress (5/2)

109358 (15/10) or (5/2)
This ticket is weird - it started in a "waiting for reply" state and was apparently issued in 2014! I can't find a ticket with this number in my records though. Sno+ are unable to use the RAL WMS - it's being looked at. In progress (5/2)

Tools - MyEGI Nagios

Tuesday 26th Jan 2016

One of the message brokers was in downtime for almost three days. The Nagios probes pick a random message broker and failover is not working, so a lot of ops jobs hung for a long time. It's a known issue and unlikely to be fixed, as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.

Monday 30th November

  • The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.


Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was something of a double whammy as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct, and this may be extended depending on the situation. VO-Nagios was also unavailable for two days, but we started it yesterday as it is running on a VM. VO-Nagios is using the Oxford SE for the replication test so it is failing those tests. I am looking to change to some other SE.

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 24th February

  • Next review of status today.

Tuesday 27th January

  • Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
  • Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

Tuesday 2nd December

  • Multicore status. Queues available (63%)
    • YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
    • NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
  • According to our table for cloud/VMs (26%)
    • YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
    • NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
  • GridPP DIRAC jobs successful (58%)
    • YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
    • NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
  • IPv6 status
    • Allocation - 42%
    • YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
    • NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex
  • Dual stack nodes - 21%
    • YES: Brunel; IC; QMUL; Oxford (4)
    • NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex, RAL T1 (15)


Tuesday 21st October

  • High loads seen in xroot by several sites: Liverpool and RALT1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).

Tuesday 9th September

  • Intel announced the new generation of Xeon based on Haswell.



Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) - Agenda. The meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report farther up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, waiting for physics validation; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14; no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas still need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost files declaration: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• WebDAV PanDA functional tests with HammerCloud are ongoing

Monitoring

Main page

DDM Accounting

space

Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) in place. Every 3 months the T2 sites performing BELOW 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A