Operations Bulletin Latest

Revision as of 11:10, 10 May 2016

Bulletin archive


Week commencing Monday 9th May 2016
Task Areas
General updates

Monday 9th May

  • Upgrades and improvements are being rolled out to the ATLAS HammerCloud tests.
  • The latest WLCG ops meeting was yesterday - worth reading for a general picture of operations. Most issues reported over the last week concerned CERN.
  • There was a GridPP Oversight Committee meeting last week. A positive review.
  • Dan: WMS logs from failed ops tests showing GlueCEStateStatus: UNDEFINED turned out to be because the queues (or partitions in Slurm speak) were set with MaxTime=INFINITE (this is the wall-clock time limit). See the sketch after this list.
  • Andrew: HTCondor Machine/Job Features testing - request for volunteers, please!
  • There is a GDB tomorrow. One topic of wider interest will be the work on lightweight sites.
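
A minimal sketch of the kind of check implied above, assuming Slurm's scontrol is available on the node; exactly how MaxTime feeds into GlueCEStateStatus depends on the site's GLUE info provider, so treat this as illustrative only:

    # Flag Slurm partitions whose MaxTime is UNLIMITED/INFINITE, which the
    # site's info provider reportedly published as GlueCEStateStatus: UNDEFINED.
    import subprocess

    def partitions_with_infinite_walltime():
        """Return names of partitions that have no finite wall-clock limit."""
        out = subprocess.check_output(
            ["scontrol", "show", "partition", "-o"], text=True)
        bad = []
        for line in out.splitlines():
            fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
            if fields.get("MaxTime", "").upper() in ("UNLIMITED", "INFINITE"):
                bad.append(fields.get("PartitionName", "?"))
        return bad

    if __name__ == "__main__":
        for name in partitions_with_infinite_walltime():
            print(f"partition {name}: set a finite MaxTime so the info "
                  f"provider can publish a defined GlueCEStateStatus")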


Tuesday 3rd May

  • The WLCG T2 reliability/availability figures have arrived.
    • ALICE: All okay.
    • ATLAS:
      • ECDF: 87%:87%
      • BHAM: 78%:78%
    • CMS: All okay.
    • LHCb:
      • QMUL: 12%:12%
      • RHUL: 87%:89%
      • ECDF: 84%:84%.
  • Outcomes from the DPM Collaboration meeting. (see minutes at end)
  • A reminder of next month’s GDB & pre-GDB and confirmation of topics. The pre-GDB, on Tuesday May 10th, will be an initial face-to-face meeting of the new Traceability & Isolation Working Group. The agenda, which is still being finalised, is available here. The GDB will take place on Wednesday May 11th. As well as some more routine updates and reports from the recent HEPiX and HEP Software Foundation workshops, there will be an in-depth session, convened by Maarten Litmaath and Julia Andreeva, focussing on ‘Easy Sites’ work around streamlining the running and management of (especially smaller) WLCG grid sites.


WLCG Operations Coordination - Agendas | Wiki Page

Monday 1st May

  • There was a WLCG ops coordination meeting last Thursday. Agenda | Minutes.
    • SL5 decommissioning (EGI, April 30, 2016). SL5 'ops' service probes will get CRITICAL. The whole retirement process is tracked here.
    • The new Traceability and Isolation working group will have a dedicated session at the May pre-GDB.
    • A new Task Force on accounting review is under preparation, to start addressing accounting issues.
    • A detailed review of the Networking and Transfers WG was done. This includes a status report, ongoing actions and future R&D projects. More details in the agenda slides.

Monday 18th April

  • MW Readiness WG achievements October 2015 - March 2016 - link to MB.

Tuesday 22nd March

  • There was a WLCG ops coordination meeting last Thursday.
    • T0 & T1s requested to check 2016 pledges attached to the agenda.
    • The Multicore TF accomplished its mission. Its twiki remains as a documentation source.
    • The gLExec TF also completed. Support will continue. Its twiki is up-to-date.
  • There was a WLCG Middleware Readiness meeting last Wednesday.

Tuesday 15th March

Tier-1 - Status Page

Tuesday 10th May

A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.

  • There was a problem with the cooling in the computer room at the end of yesterday afternoon (Monday 9th May). We stopped batch work (paused running jobs and stopped new work starting). The water pumps and then the chillers were successfully restarted after around 30 minutes, before it became necessary to take any further action. Once the cooling had restarted we un-paused current work; however, new batch jobs were not restarted until this morning.
  • We have had an ongoing problem with the tape robot since yesterday (cause unrelated to the cooling problem). Some tape mounts don't work. An engineer is expected today.
  • The Castor 2.1.15 update is on hold, pending resolution of a problem with memory usage by the Oracle database behind Castor. We had planned to update the Castor SRMs to version 2.14 in the meantime; however, following advice from CERN, this will not be done before the Castor update.
  • "GEN Scratch" storage in Castor will be decommissioned. This has been announced via an EGI broadcast.
  • We are migrating Atlas data from the T10000C to T10000D drives/media. We have moved around 300 out of 1300 tapes so far.
  • Catalin is organizing the CernVM Users Workshop at RAL on 6-8 June 2016.
  • Technically nothing to do with the Tier1, but the HEP SYSMAN dates for the RAL meeting are 21-23 June, with the first day being a security training day.
Storage & Data Management - Agendas/Minutes

Wednesday 04 May

  • GridPP/ATLAS input for GDB next week
  • Future evolution of T2s...

Wednesday 27 Apr

  • Report from DPM collaboration meeting
  • GridPP as an e-infrastructure (update)

Wednesday 20 Apr

  • Report from GridPP36. Closer to understanding what a future T2 looks like.
  • Report from DataCentreWorld/CloudExpo: the usual bigger and better data centre servers and networks, combined with growing adoption of cloud applications and exciting new IoT devices

Wednesday 06 Apr

  • GridPP is a Science DMZ!
  • Special Kudos to Marcus for all his exciting ZFS work (see blog!)
  • No meeting next week (Wedn. 13th) due to GridPP36

Wednesday 30 Mar

Tier-2 Evolution - GridPP JIRA

Tuesday 5 Apr

  • Vac 00.22 release (01.00 pre-release) ready. Includes Squid-on-hypervisor config in Puppet module.
  • Aim to do Vac 01.00 release immediately after Pitlochry.

Tuesday 22 Mar

  • GridPP DIRAC SAM tests now have gridpp and vo.northgrid.ac.uk pages
  • Some pheno jobs have been run on VAC.Manchester.uk
  • ATLAS cloud init VMs now running at Liverpool too

Tuesday 15 Mar

  • Vac 00.21 deployed on all/some machines at all 5 production Vac sites
  • Oxford Vac-in-a-Box set up in production
  • ATLAS pilot factory at CERN configured for our ATLAS cloud init VMs
  • ATLAS cloud init VMs running production jobs at Manchester and Oxford; being enabled at other sites
  • vo.northgrid.ac.uk DIRAC jobs running in GridPP DIRAC VMs: this should work for any GridPP DIRAC VO, since the VO is a parameter of the VM configuration.
  • Multipayload LHCb pilot scripts tested in VMs: the same pilot scripts can be used in VMs or at batch sites in multiprocessor slots.

Monday 29 Feb

  • Vac 00.21 released, with new MJF
  • EGI operations MB presentation positively received


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 9th February

  • 4th Feb: The data from the APEL summariser that was fixed yesterday has now propagated through the data pipeline; the Accounting Portal views and the Sync and Pub tests are all working again.
  • Sheffield is slightly behind other sites (but looks normal) and so is QMUL.

Tuesday 24th November

  • Slight delay for Sheffield.

Tuesday 3rd November

  • APEL delay (normal state) Lancaster and Sheffield.

Tuesday 20th October

The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard (see talk).

Documentation - KeyDocs

Wednesday 4th May

An introduction was added to explain _why_ cgroups are desired when using ARC/HTCondor; a hedged illustration follows the link below.

https://www.gridpp.ac.uk/wiki/Enable_Cgroups_in_HTCondor
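
As a hedged illustration of what cgroups buy you (per-job resource accounting and enforceable limits), here is a minimal sketch a job could run to check whether it has been placed in its own memory cgroup. It assumes a cgroup v1 layout with the memory controller mounted at /sys/fs/cgroup/memory, so adjust the paths for your hosts:

    # Check, from inside a batch job, whether this process sits in a dedicated
    # memory cgroup and what limit has been applied to it.
    from pathlib import Path

    def job_memory_cgroup():
        """Return (cgroup path, memory limit in bytes), or (None, None)."""
        for line in Path("/proc/self/cgroup").read_text().splitlines():
            _, controllers, path = line.split(":", 2)
            if "memory" in controllers.split(","):
                limit_file = (Path("/sys/fs/cgroup/memory")
                              / path.lstrip("/") / "memory.limit_in_bytes")
                return path, int(limit_file.read_text())
        return None, None

    if __name__ == "__main__":
        path, limit = job_memory_cgroup()
        if path is None:
            print("no memory cgroup found - this job is not isolated")
        else:
            print(f"job cgroup {path}: memory limit {limit} bytes")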


Tuesday 26th April

Statement on recent history of LSST VOMS records. To be discussed at Ops Meeting.

  • Feb 16: 3 x VOMS servers, port 15003, names: voms.fnal.gov (DigiCert-Grid, admin), voms1.fnal.gov (DigiCert-Grid, not admin), voms2.fnal.gov (opensciencegrid, admin).
  • Apr 18: 1 x VOMS server, port 15003, names: voms.fnal.gov (opensciencegrid, admin). Caused by a hitch with security that caused the omission of some data. GGUS: https://ggus.eu/?mode=ticket_info&ticket_id=120925. Fixed.
  • Apr 21: 3 x VOMS servers, port 15003, names: voms.fnal.gov (opensciencegrid, admin), voms1.fnal.gov (opensciencegrid, not admin), voms2.fnal.gov (opensciencegrid, admin).

So, similar to how it was in Feb, but the DNs (and CA_DNs) of 2 certs changed.

Please discuss.

Monday 18th April

Changes to Approved VOs. I scanned the EGI Operations Portal today, and found the following updates.

  • For CDF, for voms.fnal.gov:15020 server, new DN and CA_DN.
  • For DZERO, for voms.fnal.gov:15002 server, new DN and CA_DN.
  • For LSST, voms1.fnal.gov:15003 and voms2.fnal.gov:15003 are not working properly. voms.fnal.gov:15003 is working properly, but for that, DN and CA_DN are new.

Note: apart from on the UI, having defunct servers in the config does no harm. And since servers have the nasty habit of coming back after they have been offline for a few days or weeks, it may be best to delay removal - I don't know. Anyway, I've updated Approved VOs to show the present parameters and I hope this is helpful. https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas

Monday 9th May

  • There was an EGI ops meeting today.
    • SL5 status was reviewed. Services remaining: the WMS/LB being decommissioned at Glasgow; the Lancaster BDII, scheduled for upgrade; the Oxford SAM/VO Nagios (the only one on SL5; plans are in place across all NGIs to move to a central ARGO service); and the RAL T1 Castor SRM systems (the SRM upgrade is waiting on the Castor upgrade).


Monday 18th April


Monitoring - Links MyWLCG

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add an FAQ section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


Tuesday 31st March

Monday 7th December

On-duty - Dashboard ROD rota

Tuesday 15th March

  • A normal week with few alarms, which were fixed in time. Birmingham has a low-availability ticket. ECDF has a ticket on hold as it tests an ARC CE, and putting this CE in downtime might affect proper job tests from ATLAS.


Tuesday 16th February

  • Team membership discussed at yesterday's PMB. We will need to look to the larger GridPP sites for more support.

Rollout Status - WLCG Baseline

Tuesday 7th December

  • Raul reports: validation of the site BDII on CentOS 7 is done.

Tuesday 15th September

Tuesday 12th May

  • MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.


References


Security - Incident Procedure Policies Rota

Tuesday 10th May

Tuesday 3rd May

Monday 25th April

  • Security threat Risk Assessment undertaken by EGI.

Tuesday 19th April

  • Initial results from the ARGUS banning tests were presented at GridPP36: CREAM 50%, ARC 63%, SRM 31% (% successful banning); these will be followed up in the coming weeks.
  • UK NGI Security team meeting Weds 20th.

The EGI security dashboard.


Services - PerfSonar dashboard | GridPP VOMS

This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere (cross-check the Tier-1 update).

Tuesday 10th May

  • Next LHCOPN and LHCONE meeting: Helsinki (FI), 19-20 September 2016: Agenda.

Tuesday 22nd March

Tuesday 8th December

  • Given the recent network issues and role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial and hopefully the T1 too. Andrew will seek an update to the whois records and name servers once the other host sites are confirmed.
  • A look at the network problems this week will be of interest. Duncan supplied this link, and Ewan the one for the dual-stack instance.
Tickets

Monday 9th May 2016, 13.00 BST
39 Open UK Tickets this month

So long and thanks for all the jobs - decommissioning tickets.
120973 (Glasgow, 2 WMSes and an LB).
121258 (Tier 1, just one WMS).
120664 (Tier 1, GenScratch disk pool).
Not much else to say, nothing to see here. Move along...

NGI
119995 (7/3)
Cleaning up old uncertified NGS sites. Any joy, Jeremy? In Progress (18/4)

NEUGRID CVMFS STRATUM PROBLEMS
121179 (2/5)
The neugrid stratum at the Tier 1 isn't behaving - no site was notified with this ticket, so it likely dodged people's notice. I sent it RAL's way - feel free to bounce it elsewhere if it isn't a problem at the Tier 1. Assigned (9/5) Update - the submitter confirms things are fixed; it looks like the ticket can be closed. (A quick stratum health-check sketch follows.)
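
For anyone wanting a quick way to eyeball a stratum's health, a hedged sketch: every CVMFS repository publishes a .cvmfspublished manifest at its root, whose S and T fields carry the revision and publication timestamp. The URL below is illustrative only - substitute the real stratum-1 endpoint and repository name:

    # Fetch a repository's .cvmfspublished manifest and report revision and
    # publication time. Hypothetical URL - replace with the actual endpoint.
    import urllib.request
    from datetime import datetime, timezone

    URL = ("http://cvmfs-stratum1.example.ac.uk"
           "/cvmfs/neugrid.example.org/.cvmfspublished")

    def manifest_fields(url):
        """Parse the key/value header lines of a .cvmfspublished manifest."""
        raw = urllib.request.urlopen(url, timeout=10).read()
        fields = {}
        for line in raw.split(b"\n"):
            if line == b"--":  # signature section starts here; stop parsing
                break
            if line:
                fields[chr(line[0])] = line[1:].decode()
        return fields

    f = manifest_fields(URL)
    print("revision:", f.get("S"))
    print("published:",
          datetime.fromtimestamp(int(f.get("T", "0")), timezone.utc))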

SUSSEX
Ops tests woes:
121028 (25/4) - CREAM CE
120735 (11/4) - availability
120714 (9/4) - CA distro.
Being handled as best Jeremy M can - it looks like the last two issues are on the mend. Not sure about the first one.

118289 (10/12/15)
gridpp pilot role ticket. No news for a while, but hopefully a familiar face will sweep in and save the day soon. On Hold (25/1)

RALPP
120282 (18/3)
Atlas-centric HTTP support ticket. Chris is putting the site in downtime next week to upgrade the dCache hardware and version, and we'll see how this looks after that. On hold (6/5)

118628 (5/1)
LZ pilot ticket. No news after testing of the test version of ARC didn't go so well, so Chris decided to wait until they have a newer UMD4 CE to try it out on, or at least until the fix makes it into the proper repos. The reminder date has passed - any news? On Hold (22/3)

OXFORD
120019 (7/3)
CMS federation subscription change for Oxford. Kashif has worked on this and it looks like it might be fixed. Any news? In progress (29/4)

121139 (22/4)
Enabling skatelescope.eu on the Oxford VOMS. Kashif kicked it but Robert's tests didn't work, so debugging is ongoing. In progress (6/5)

BRISTOL
121024 (25/4)
CMS transfer problems. Phedex was upgraded, but a few more problems with some dodgy datasets came up - Lukasz seems to have it all in hand though. In progress (6/5)

120455 (29/3)
A spot of self-ticketing: here Lukasz asked CMS to validate their new HTCondor CE. A lot of conversation in the ticket (some regarding CMS multicore); the last entry has Lukasz looking at the CERN Condor accounting daemon. Assigned (could do with being changed to a different status) (9/5)

BIRMINGHAM
121125 (28/4)
The atlas storage dump is missing at Birmingham - Matt is looking for it (I had more trouble than I should have setting up this cron job at Lancaster - I forgot my 'nix-admining basics! The shame!). In progress (4/5)

120948 (20/4)
Ops availability ticket, on hold whilst things recover - naught to see here. On Hold (20/4)

GLASGOW
120135 (11/3)
Another atlas-centric HTTP TF ticket. The ticket could do with an update or being put on hold. In progress (7/4)

120351 (22/3)
Enabling LSST at Glasgow, on hold awaiting the new identity management system[1]. Alessandra posted a helpful link here - how are things going? (5/5) Update - I noticed that 117706 (enabling pilots for pheno and friends) is done, so hopefully this just needs a roundtuit?

[1] Robin's started working on a CentOS7 Argus server build with Ansible at Lancaster, if that's relevant to your, or anyone else's, interests.

ECDF
121227 (4/5)
A crusty CREAM CE is causing ROD ops test failures at ECDF - Andy and Marcus are deciding its fate. In progress (5/5) Update - the immediate issue was solved, and the ticket closed.

120004 (7/3)
The ARCHER-facing test CE is suffering ROD failures. Was a decision reached about whether or not to put the service in downtime or similar? I see the CE is in a short downtime at the moment. On Hold (25/4) Update - Andy is unsure what to do and has asked for advice, or whether a special case can be made for this CE in the monitoring/GOCDB.

121285 (8/5)
Fleeting atlas transfer problems, caused by a network blip. The blip has passed, and Marcus asks whether any more problems are being seen. Waiting for reply (9/5)

SHEFFIELD
121279 (7/5)
Atlas transfer failures - Elena noticed that the files don't actually exist at Sheffield and will declare them lost forthwith. In progress (8/5)

MANCHESTER

120998 (22/4)
skatelescope.eu VO creation ticket, nearly done. On Hold (4/5)

120430 (24/3)
Enabling the IceCube VO at Manchester. It seems quite involved (GPU jobs sound quite exciting!), but things look to be moving along nicely. In progress (5/5)

RHUL
121257 (6/5)
ROD ticket for multiple problems - a CE fell over and is being looked at (the CE problems might explain the BDII failures). In progress (6/5)

121231 (5/5)
LHCb pilots dying at RHUL. After finding a few problems and fixing them, Govind wonders if the problems persist. Waiting for reply (8/5)

QMUL
121245 (5/5)
Friday ROD issues - looks like multiple CEs were/are having a bad time of it. Assigned (5/5)

120352 (22/3)
Enabling LSST at QM. Alessandra posted the link to the information that Dan asked for. In Progress (5/5)

120204 (15/3)
The well-understood problem with lhcb jobs submitting to QM's dual-stack CEs. Waiting on 120586, where there has been no news for a month, although the last entry seemed positive. On Hold (25/4)

100IT (for 100% completeness)
121189 (2/5) - Being handled.
121271 (6/5) - Assigned
(interestingly this ticket asks for support for dteam as a child of 121262).

And Finally...

THE TIER 1
120810 (13/4)
Biomed asked that their castor storage pool that's being decommissioned (see 120664) be set to read-only prior to the decommissioning date. Gareth pointed out that this request is redundant, as the disk pool is set to be made read only as detailed in the decommissioning announcement. On Hold (27/4)

120350 (22/3)
Enabling LSST at RAL. Andrew L reports good progress, though there is still some work to go through. In progress (6/5)

https://ggus.eu/?mode=ticket_info&ticket_id=120920 (19/4)
Sno+ having xrootd problems at RAL. A lot of back and forth is going on; the issue is being worked on. In progress (6/5)

117683 (18/11/15)
Castor not publishing GLUE 2. This is being worked on slowly in the background; it requires no small amount of dev work. On Hold (5/4)

119841 (1/3)
HTTP support ticket from the HTTP TF. On Hold whilst the developers are consulted. On Hold (26/4)

120954 (21/4)
SRM endpoint simplification for LHCb. At last check it looked good to remove the old alias, with a thumbs up from LHCb. Waiting for reply (should be "In progress" I think) (3/5)

121147 (29/4)
CMS file reading failures at the Tier 1. Andrew L checked things and they looked okay; he asked for some clarification and extra information but has had no word back. Waiting for reply (29/4)


Tools - MyEGI Nagios

Tuesday 5th April 2016

Oxford had a scheduled network warning, so the active Nagios instance was moved from Oxford to Lancaster. I am not planning to move it back to Oxford for the time being.


Tuesday 26th Jan 2016

One of the message brokers was in downtime for almost three days. The Nagios probes pick up a random message broker and failover is not working, so a lot of ops jobs hung for a long time. It's a known issue and unlikely to be fixed, as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.

Monday 30th November

  • The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool; a small worked sketch of the usual formulas follows.
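
As a hedged sketch of the formulas such documents usually describe (consult the ARGO document itself for the authoritative treatment of UNKNOWN periods and status weighting): availability discounts time whose status is unknown, while reliability additionally discounts scheduled downtime:

    # A/R as commonly defined for EGI/WLCG reports; all arguments in hours.

    def availability(up, total, unknown=0.0):
        """Availability = UP / (TOTAL - UNKNOWN)."""
        return up / (total - unknown)

    def reliability(up, total, scheduled_down=0.0, unknown=0.0):
        """Reliability = UP / (TOTAL - SCHEDULED_DOWNTIME - UNKNOWN)."""
        return up / (total - scheduled_down - unknown)

    # Example: a 720-hour month with 680 h up, 20 h scheduled downtime
    # and 10 h unknown:
    print(f"A = {availability(680, 720, 10):.1%}")      # ~95.8%
    print(f"R = {reliability(680, 720, 20, 10):.1%}")   # ~98.6%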


Tuesday 6 Oct 2015

Moved the Gridppnagios instance back to Oxford from Lancaster. It was kind of a double whammy, as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

Tuesday 29 Sep 2015

Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct, and this may be extended depending on the situation. The VO Nagios was also unavailable for two days, but we started it yesterday as it is running on a VM. The VO Nagios is using the Oxford SE for its replication test, so it is failing those tests. I am looking to change to some other SE.

VOs - GridPP VOMS VO IDs Approved VO table

Tuesday 19th May

  • There is a current priority on enabling and supporting our newly joining communities.

Tuesday 5th May

  • We have a number of VOs to be removed. Dedicated follow-up meeting proposed.

Tuesday 28th April

  • For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503 (a sketch of applying the change follows).
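
A minimal sketch of applying that change on a node, assuming the common layout where /etc/vomses is a directory of per-VO files with lines of the form "vo" "host" "port" "DN" "vo"; the glob pattern and paths are assumptions, so check them against your own configuration:

    # Rewrite the port for the two GridPP VOMS hosts in snoplus vomses files.
    import re
    from pathlib import Path

    CHANGES = {
        "voms02.gridpp.ac.uk": ("15003", "15503"),
        "voms03.gridpp.ac.uk": ("15003", "15503"),
    }

    def fix_vomses(path):
        text = path.read_text()
        for host, (old, new) in CHANGES.items():
            # Match: "host" "old-port"  ->  "host" "new-port"
            text = re.sub(rf'("{re.escape(host)}")\s+"{old}"',
                          rf'\1 "{new}"', text)
        path.write_text(text)

    for f in Path("/etc/vomses").glob("snoplus*"):
        fix_vomses(f)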

Tuesday 31st March

  • LIGO are in need of additional support for debugging some tests.
  • LSST now enabled on 3 sites. No 'own' CVMFS yet.
Site Updates

Tuesday 23rd February

  • For January:

    • ALICE: All okay.
    • RHUL: 89%:89%
    • Lancaster: 0%:0%
    • RALPP: 80%:80%
    • RALPP: 77%:77% (the two RALPP figures are the CMS and LHCb ones; see the site response below)

  • Site responses:
    • RHUL: The largest problem was related to the SRM. The DPM version was upgraded and it took several weeks to get it working again (13 Jan onwards). Several short-lived occurrences of running out of space on the SRM for non-ATLAS VOs. For around 3 days (15-17 Jan) the site suffered from a DNS configuration error by their site network manager which removed their SRM from the DNS, causing external connections such as tests and transfers to fail. For one day (25 Jan) the site network was down for upgrade to the 10Gb link to JANET. Some unexpected problems occurred extending the interruption from an hour to a day. The link has been successfully commissioned.
    • Lancaster: The ASAP metric for Lancaster for January is 97.5%. There is a particular problem with ATLAS SAM tests, which doesn’t affect the site activity in production and analysis; it relates to the path name being too long. A re-calculation has been performed.
    • RALPP: Both CMS and LHCb low figures are due to specific CMS jobs overloading the site SRM head node. The jobs should have stopped now.



Meeting Summaries
Project Management Board - Members | Minutes | Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda Meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier-1 report further up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 in production since Dec 1st

• Deployment has not been transparent; many issues have been solved, and the grid is filled again

• MC15 is expected to start soon, waiting for physics validation; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14; no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost file declarations: only DDM ops can issue lost-file declarations for now; cloud support needs to file a ticket.

• WebDAV PanDA functional tests with HammerCloud are ongoing

Monitoring

• Main page
• DDM Accounting
• Space
• Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months, the T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A