Operations Bulletin Latest

From GridPP Wiki

Revision as of 11:49, 3 November 2015

Bulletin archive

Week commencing 2nd November 2015
Task Areas
General updates

Tuesday 3rd November

  • There is a GDB this week: Agenda.
  • (re)introduction of the STRICT_RFC2818 mechanism in Globus. See Jens's comments on TB-SUPPORT.
  • Re-allocation of space for ATLAS as a result of cleanup and removal of the ATLASPRODDISK space token.
  • Timeout of the Nagios glexec probe.
  • The GridPP hardware allocations are just about final, so expect figures very soon. Purchases will be made this financial year.
  • DPM workshop 2015 7th-8th Dec at CERN - Registration is open.
  • The October WLCG A/R figures are now available:
    • ALICE. All okay.
    • ATLAS.
      • QMUL: 85%:85%
      • Lancaster: N/a:N/a?
      • Liverpool: 85%:100%
      • Sheffield: 78%:78%
    • CMS. All okay.
    • LHCb
      • QMUL: 79%:79%
      • Liverpool: 86%:100%
      • Sheffield: 86%:86%
      • RAL PPD: 79%:79%
  • Exploring options for VM based sites (in respect of the monitoring within EGI): Perhaps setup a 'community platform'.
  • RCUK Cloud Working Group - a first workshop on the 1st December at Imperial College.
  • From the MB last week:
    • Memory Requirements: LHC experiments all basically agreed that 2GB/core was the baseline but that some (advertised) resources with up to 4GB/core would be valuable for some workflows.
    • February as kick-off for technical evolution groups.
    • PCP - Pre-commercial-procurement and HNSciCloud. This is approved and starts January-16. UK has small involvement. The plan is to build on the hybrid cloud service that results, in order to deploy a European Open Science Cloud funded from the INFRADEV-04 (2016) call

Tuesday 27th October

  • Raja raised: "ARC CE publishing" and querying the BDII.
  • Luke asked about HTC CE documentation links (for European installation).
  • John H asked for comments on a dmlite message after update to dpm 1.8.10.
WLCG Operations Coordination - Agendas Wiki Page

Tuesday 27th October

  • 13th MW Readiness WG meeting THIS Wed 28/10 @ 4pm CET in CERN room 28-S-023 or via vidyo.
  • There was a WLCG ops coordination meeting last Thursday: Agenda - Minutes.

Next Meeting is scheduled for Thursday 22nd October

Tuesday 6th October

  • There was a WLCG ops coordination meeting last week. Minutes. Agenda (which has John Gordon's accounting slides).
  • The highlights:
    • dCache sites should install the latest fix for SRM solving a vulnerability
    • All sites hosting a regional or local site xrootd should upgrade it at least to version 4.1.1
    • CMS DPM sites should consider upgrading dpm-xrootd to version 3.5.5 now (from epel-testing) or after mid October (from epel-stable) to fix a problem affecting AAA
    • Tier-1 sites should do their best to avoid scheduling OUTAGE downtimes at the same time as other Tier-1s supporting common LHC VOs. A calendar will be linked in the minutes of the 3 o'clock operations meeting to make it easy to find out whether there are already downtimes on a given date
    • The multicore accounting for WLCG is now correct for 99.5% of the CPU time, with the few remaining issues being addressed. Corrected historical accounting data is expected to be available from the production portal by the end of the month
    • All LHCb sites will soon be asked to deploy the "machine features" functionality
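The "machine features" functionality mentioned above publishes per-host and per-job values as one small file per key, in directories pointed to by the MACHINEFEATURES and JOBFEATURES environment variables. A minimal sketch of how a payload might read them, assuming that layout (key names such as hs06 and jobslots follow the MJF specification; the helper name is mine):

```python
import os

def read_features(directory):
    """Read a machine/job features directory: each file's name is a key
    and its (stripped) content is the value, per the MJF convention."""
    features = {}
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            with open(path) as f:
                features[name] = f.read().strip()
    return features

# A job would typically locate the directories via environment variables:
# machine = read_features(os.environ["MACHINEFEATURES"])
# job = read_features(os.environ["JOBFEATURES"])
```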

Tuesday 22nd September

  • There was an ops coordination meeting last Thursday: Minutes.
  • Highlights:
    • All four experiments now have an agreed workflow with the T0 for tickets that should be handled by the experiment supporters but were accidentally assigned to the T0 service managers.
    • A new FTS3 bug fixing release 3.3.1 is now available.
    • A globus lib issue is causing problems with FTS3 for sites running IPv6.
    • A rogue Glasgow configuration management tool was replacing the current VOMS configuration with the old one; this was picked up but unfortunately discussed as though sites had not got the message about using the new VOMS.
    • No network problems experienced with the transatlantic link despite 3 out of 4 cables being unavailable.
    • T0 experts are investigating the slow WN performance reported by LHCb and others.
    • A group of experts at CERN and CMS are investigating ARGUS authentication problems affecting CMS VOBOXes.
    • T1 & T2 sites please observe the actions requested by ATLAS and CMS (also on the WLCG Operations portal).
  • Actions for Sites; Experiments.

Tuesday 15th September

Tier-1 - Status Page

Tuesday 3rd November A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here

  • A week ago we had two disk server failures over the weekend (both part of Atlas Tape). The servers have had disks replaced and have re-run the acceptance testing. We anticipate their return to service very soon. There is also one further disk server out of service in AtlasTape at the moment.
  • We are investigating why LHCb batch jobs sometimes fail to write results back to Castor (and they sometimes fail to write remotely as well).
  • As reported over the last couple of weeks, we had a problem with glexec on the worker nodes over a weekend. We are still trying to understand why this problem was not seen during the testing and roll-out of the new worker node configuration.
Storage & Data Management - Agendas/Minutes

Wednesday 28 Oct

  • Summary of "UK T0" workshop - GridPP well represented
  • Sites should not upgrade to DPM 1.8.10 just yet

Wednesday 02 Sep

  • Catch up with MICE
  • How to do transfers of lots of files with FTS3 without the proxy timing out (in particular if you need it vomsified)
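One common approach to the proxy-timeout problem is simply to batch the transfer list so that each FTS3 job finishes well inside the proxy lifetime. A minimal sketch of the batching (the `chunk` helper is mine; the actual submission, e.g. via fts-transfer-submit or the FTS3 REST API, is not shown):

```python
def chunk(items, size):
    """Split a long transfer list into batches of at most `size` items,
    so each FTS3 job can complete well within the proxy lifetime."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

# Each batch would then be submitted as a separate FTS3 job, renewing
# or re-delegating the (vomsified) proxy between submissions as needed.
```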

Wednesday 12 Aug

  • sort of housekeeping: data cleanups, catalogue synchronisation - in particular namespace dumps for VOs
  • GridPP storage/data at future events; GridPP35 and Hepix and Cloud data events

Wednesday 08 July

  • Huge backlog of ATLAS data from Glasgow waiting to go to RAL, and oddly varying performance numbers - investigating
  • How physics data is like your Windows 95 games

Wednesday 01 July

  • Feedback on CMS's proposal for listing contents of storage
  • Simple storage on expensive raided disks vs complicated storage on el cheapo or archive drives?

Tier-2 Evolution - GridPP JIRA

Tuesday 3 Nov

  • LHCb prototype of GOCDB pointers to resource BDII done
  • T2C tests at Oxford ongoing

Tuesday 20 Oct

  • LHCb multipilot VMs now in production
  • Support for APEL-Sync records in Vac, but need to co-ordinate with APEL team to validate it. This is to allow pure-VM sites like UCL to pass APEL SAM tests (GRIDPP-10)
  • Last GridPP Technical Meeting decided to test disk-less operation at Oxford for CMS (GRIDPP-20) and LHCb (GRIDPP-21).

Tuesday 6 Oct

  • UCL Vac site now running LHCb test of two payloads per dual processor VM. Total of dual processor VMs at UCL now 120.

Tuesday 29 Sep

  • UCL Vac site updated with most recent version of Vac-in-a-Box. Now running ~216 jobs: LHCb MC and ATLAS certification jobs.
  • Drawing up list of tasks needed to be able to run a site for GridPP-supported VOs purely using VMs (e.g. VM certification by experiments etc.)
  • Discussion at GridPP Technical Meeting on storage options, including xrootd-based sites (i.e. xrootd not DPM/dCache)

Tuesday 22 Sep

Thursday 17 Sep

  • Task force to start developing advice for sites to simplify their operation in line with "6.2.5 Evolution of Tier-2 sites" in the GridPP5 proposal.
  • Mailing list for Tier-2 evolution activities: gridpp-t2evo@cern.ch - anyone welcome to join
  • Also GridPP project on the CERN JIRA service for tracking actions. Can be used with a full or lightweight CERN account. You need to be added manually or be on the gridpp-ops@cern.ch mailing list to browse issues.

Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 3rd November

  • APEL delay (normal state) for Lancaster and Sheffield.

Tuesday 20th October The WLCG MB decided to create a Benchmarking Task force led by Helge Meinhard see talk

Tuesday 22nd September

  • Slight delay for Sheffield but overall okay - although there is a gap between today's date and the most recent update for all sites. Perhaps an APEL delay.

Monday 20th July

  • Oxford publishing 0 cores from Cream today. Maybe they forgot to switch one off. Check here.
Documentation - KeyDocs

Tuesday 20th October, 2015

Approved VOs document updated with temporary section for LZ

Tuesday 29th September Steve J: problems with the voms server at FNAL, voms.fnal.gov, have been detected; I will resolve them soon and may issue an update to Approved VOs, alerting sites via TB-SUPPORT should that occur. Approved VOs potentially affected are CDF, DZERO and LSST. Please do not act yet.

Tuesday 22nd September

  • Steve J is going to undertake some GridPP/documentation usability testing.

Tuesday 18th August

  • Lydia's document - Setup a system to do data archiving using FTS3

Tuesday 28th July

  • Ewan: /cvmfs/gridpp-vo help ... there's a lot of historical stuff on the GridPP wiki that makes it look a lot more complicated than it is now. We really should have a bit of a clear out at some point.

Tuesday 23rd June

  • Reminder that documents need reviewing!

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Tuesday 20th October

Meeting last week: https://wiki.egi.eu/wiki/Agenda-12-10-2015

Monitoring - Links MyWLCG

Tuesday 16th June

  • F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.

Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Tuesday 3rd November

  • The long-standing UCL availability alarm went green on 29th October. We are not sure why!
  • Quite a lot of activity on the dashboard this week, but only one or two new tickets.
  • Tickets: Five for availability / reliability: Sussex, Sheffield, Liverpool, Lancaster and UCL. Two for GLUE2 validation: Liverpool and QMUL. One for the CEs at QMUL.

Tuesday 20th October

Lots of bits here and there, but no big pattern. Tickets about CE and storage problems open at several sites. QMUL notable as going on for a while, probably with some kind of configuration problem they're not identifying.

Tuesday 6th October

  • With the exception of the dashboard getting really confused early in the week as the Nagios instances at Oxford and Lancaster came and went, it's been a fairly quiet week. There are four outstanding tickets:
    • Three for availability / reliability (Sussex, Liverpool and Lancaster).
    • One at Bristol for a GridFTP transfer problem.

Tuesday 15th September

  • Generally quiet. QMUL have some grumbliness with the CEs. However, I understand much of this is caused by the batch farm being busy. There are low-availability tickets 'on hold' for Liverpool and UCL.
Rollout Status WLCG Baseline

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.

Tuesday 17th March

  • Daniela has updated the [https://www.gridpp.ac.uk/wiki/Staged_rollout_emi3 EMI-3 testing table]. Please check it is correct for your site. We want a clear view of where we are contributing.
  • There is a middleware readiness meeting this Wednesday. Would be good if a few site representatives joined.
  • Machine job features solution testing. We fed back that we will only commence tests if more documentation is made available. This stops the HTC solution until after CHEP. Is there interest in testing other batch systems? Raul mentioned SLURM. There are also SGE and Torque.


Security - Incident Procedure Policies Rota

Tuesday 3rd November

  • EGI SVG Advisory - Various Java CVEs with max CVSS score.

Monday 26th October

Tuesday 20th October

Tuesday 13th October

  • Nothing to report
  • Next UK Security Team meeting scheduled for 28th Oct.

Monday 5th October

  • Updated IGTF distribution version 1.68 available - https://dist.igtf.net/distribution/igtf/current/
  • Update on incident broadcast EGI-20150925-01 relating to compromised systems in China. - The EGI, WLCG and VO security teams are continuing their investigations. Affected sites and users have been contacted and there is no present indication of further action needed by any site in the UK. However, as more information comes to light, additional updates may be made in the near future and sites are asked as always to read any updates carefully, taking actions as recommended.

Tuesday 29th September

  • Incident broadcast EGI-20150925-01 relating to compromised systems in China.
  • UK security team meeting scheduled for 30th Sept.

Monday 29th September

  • IGTF has released a regular update to the trust anchor repository (1.68) - for distribution ON OR AFTER October 5th

The EGI security dashboard.

Services - PerfSonar dashboard | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Tuesday 6th October

Tuesday 14th July

  • GridPP35 in September will have a part focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.


Monday 2nd November 2015, 13.30 GMT
22 Open UK Tickets this week. First Monday of the Month, so all the tickets get looked at, however run of the mill they are.

First, the link to all the UK tickets.

116915 (14/10)
Low availability Ops ticket. On-holded whilst the numbers soothe themselves. On Hold (23/10)

116865 (12/10)
Sno+ job submission failures. Not much on this ticket since it was set In Progress. Looks like an argus problem. How are things at Sussex before Matt RB moves on? (We'll miss you Matt!) In progress (20/10)

117261 (28/10)
Atlas jobs failing with stage-out failures. Federico notices that the failures are due to odd errors - "file already existing" - and that things seem to be calming down. He's at a loss as to what RALPP can do. Checking the panda link suggests the errors are still there today. Waiting for reply (29/10)

116775 (6/10)
Bristol's CMS glexec ticket. It looks like the solution is to have more cms pool accounts (which of course requires time to deploy). In progress (28/10)

117303 (30/10)
CMS, not Highlander fans, don't seem to believe that There can be only One (glexec ticket). Poor old Bristol seem to be playing whack-a-mole with duplicate tickets. Is there a note that can be left somewhere to stop this happening? Assigned (30/10)

95303 (Long long ago)
Edinburgh's (and indeed Scotgrid's) only ticket is this tarball glexec ticket. A bit more on this later. On hold (18/5)

114460 (18/6)
Gridpp (and other) VO pilot roles at Sheffield. No news for a while, snoplus are trying to use pilot roles now for dirac so this is becoming very relevant. In progress (9/10)

116560 (30/9)
Sno+ jobs failing, likely due to too many being submitted to the 10 slots that Sno+ has. Maybe a WMS scheduling problem - Stephen B has given advice. Elena asked if the problem persisted a few weeks ago. Waiting for reply (12/10)

116967 (17/10)
A ROD availability ticket, on hold as per SOP. On hold (20/10)

116478 (28/9)
Another availability ticket. Autumn was not kind to many of us! On hold (8/10)

116882 (13/10)
Enabling pilot snoplus users at Lancaster. Shouldn't have been a problem, but turned into a bit of a comedy/tragedy of errors by yours truly mucking up. Hopefully fixed now - thanks to Daniela for her patience. In progress (2/11)

95299 (Far far away)
glexec tarball ticket. There's been a lot of communication with the glexec devs about this - the hopefully last hurdle is sorting out the RPATHs for the libraries. It's not a small hurdle though... On hold (2/11)

A ticket about jumbo frame problems, submitted to QM. After Dan provided some education the user replied that he only sees this problem at two atlas sites. But he is contacting the network admins at his institution to see if it is their end. On hold (29/10)
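Debugging jumbo frame issues like this often starts from finding the largest do-not-fragment ping that gets through (`ping -M do -s <size>` style tests). A minimal sketch of the header arithmetic, assuming IPv4 ICMP echo (the helper name is mine):

```python
def max_ping_payload(mtu, ip_header=20, icmp_header=8):
    """Largest ICMP echo payload that fits in one frame of the given MTU,
    i.e. the biggest -s value that should survive a do-not-fragment ping."""
    return mtu - ip_header - icmp_header

# For a 9000-byte jumbo MTU this gives 8972; for a standard 1500 MTU, 1472.
# If pings at 1472 pass but anything near 8972 fails, some hop on the path
# is not jumbo-clean.
```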

117011 (19/10)
ROD ticket for glue-validate errors. Went away for a while after Dan re-yaimed his site bdii, but possibly back again. Daniela suggests re-running the glue-validate test. Reopened (2/11)

116689 (6/10)
Another ROD ticket, where Ops glexec test jobs are seemingly timing out for QM (this is the ticket Daniela mentioned on the ops mailing list). Dan noted that with the cluster half full tests were passing, suggesting some kind of load correlation (but as he also notes - what's getting loaded and causing the problem - Batch, CE or WNs?). Kashif reckons the argus server, and suggests a handy glexec time test which he posted. In progress (2/11)

117324 (2/11)
A fresh looking ROD ticket - Raul had to restart the BDII and hopefully that got it. In progress (2/11)

116358 (22/9)
Missing Image at 100IT. 100IT have asked for more details, no news since. Waiting for reply (19/10)

116866 (12/10)
Snoplus pilot enablement (not actually a word) at the Tier 1. New accounts were being requested after some internal discussion. On hold (19/10)

116864 (12/10)
CMS AAA tests failing (the submitter notes "again..."). There are some oddities with other sites, which might be remote problems, but Andrew notes that previous manual fixes have been overwritten which likely explains why problems came back. In progress (does it need to be waiting for a reply?) (26/10)

117171 (24/10)
LHCb had problems with an arc CE that was misbehaving for everyone. Things were fixed, and this ticket can now be closed. Waiting for reply (can be closed) (27/10)

117277 (30/10)
Atlas have spotted "bring online timeout has been exceeded". This appears to be a mixture of problems adding up, such as a number of broken disk nodes and heavy write access by atlas. In progress (2/11)

117248 (28/10)
I believe related to the discussion on tb-support, this ticket requests that new SRM host certs that meet the requirements specified be requested for the RAL SRMs. Jens was on it, and the new certs are ready to be deployed. In progress (30/10)

Other VO Nagios - some badness at Sussex, but they have a ticket open for that.

Tools - MyEGI Nagios

Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was a kind of double whammy as both sites went down together. Fortunately the Oxford site was partially working so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct and it may be extended depending on the situation. VO-Nagios was also unavailable for two days but we restarted it yesterday as it is running on a VM. VO-Nagios is using the Oxford SE for its replication test so it is failing those tests. I am looking to change to some other SE.

Tuesday 09 June 2015

  • ARC CEs were failing the Nagios test because of non-availability of the EGI repository. The Nagios test compares the CA version against the EGI repo. It started on 5th June when one of the IP addresses behind the web server stopped responding. The problem went away in approximately 3 hours, then started again on 6th June. It was finally fixed on 8th June. No reason was given in any of the tickets opened regarding this outage.

Tuesday 17th February

  • Another period where message brokers were temporarily unavailable seen yesterday. Any news on the last follow-up?

Tuesday 27th January

  • Unscheduled outage of the EGI message broker (GRNET) caused a short-lived disruption to GridPP site monitoring (jobs failed) last Thursday 22nd January. Suspect BDII caching meant no immediate failover to stomp://mq.cro-ngi.hr:6163/ from stomp://mq.afroditi.hellasgrid.gr:6163/
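The suspected failure mode above is that cached BDII information stopped the client falling over from one broker to the other. Client-side failover can be sketched as walking an ordered list of broker URIs and taking the first that answers; this is an illustrative sketch, not the actual monitoring code, and `first_reachable` and the probe callback are assumed names:

```python
def first_reachable(brokers, is_reachable):
    """Return the first broker URI for which the supplied probe succeeds,
    falling through the ordered list; raise if all are unreachable."""
    for uri in brokers:
        if is_reachable(uri):
            return uri
    raise RuntimeError("no message broker reachable")

# Example with the two brokers mentioned above and a hypothetical probe:
# first_reachable(["stomp://mq.afroditi.hellasgrid.gr:6163/",
#                  "stomp://mq.cro-ngi.hr:6163/"], probe)
```

The point of the sketch is that the probe must be re-run against live endpoints rather than a cached (BDII) view, otherwise a dead primary keeps being selected.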

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority for enabling/supporting our joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
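For sites carrying their own vomses files, each entry has the form "alias" "host" "port" "server DN" "vo", so the updated SNOPLUS.SNOLAB.CA lines would look roughly like this (the server DNs shown are placeholders, not the real certificate subjects):

```
"SNOPLUS.SNOLAB.CA" "voms02.gridpp.ac.uk" "15503" "/C=UK/.../CN=voms02.gridpp.ac.uk" "SNOPLUS.SNOLAB.CA"
"SNOPLUS.SNOLAB.CA" "voms03.gridpp.ac.uk" "15503" "/C=UK/.../CN=voms03.gridpp.ac.uk" "SNOPLUS.SNOLAB.CA"
```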

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 24th February

  • Next review of status today.

Tuesday 27th January

  • Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
  • Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

Tuesday 2nd December

  • Multicore status. Queues available (63%)
    • YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
    • NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
  • According to our table for cloud/VMs (26%)
    • YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
    • NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
  • GridPP DIRAC jobs successful (58%)
    • YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
    • NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
  • IPv6 status
    • Allocation - 42%
    • YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
    • NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex
  • Dual stack nodes - 21%
    • YES: Brunel; IC; QMUL; Oxford (4)
    • NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex, RAL T1 (15)

Tuesday 21st October

  • High loads seen in xroot by several sites: Liverpool and RALT1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).

Tuesday 9th September

  • Intel announced the new generation of Xeon based on Haswell.

Meeting Summaries
Project Management Board - Members Minutes Quarterly Reports


GridPP ops meeting - Agendas Actions Core Tasks


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda Meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier-1 report further up this page.

WLCG Grid Deployment Board - Agendas MB agendas


NGI UK - Homepage CA


UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015


• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved and the grid is filled again

• MC15 is expected to start soon, waiting for physics validations; evgen testing is underway and close to finalised. Simulation expected to be broadly similar to MC14, no blockers expected.


• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

Lost-file declarations: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• Webdav panda functional tests with Hammercloud are ongoing


Main page

DDM Accounting




• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months the T2 sites performing below 80% are reported to the International Computing Board.





  • N/A
To note

  • N/A