'''Monday 16th May 2016, 15.00 BST'''<br />

42 Open UK Tickets this week.

'''GOCDB/VOFEED mismatch tickets'''<br />

There are 7 open tickets left from last week's campaign to clean up the VO tags featured in the GOCDB. Only the Birmingham ticket is still in the "assigned" state; the rest are undergoing discussion or awaiting feedback/clarification.<br />

BIRMINGHAM [https://ggus.eu/?mode=ticket_info&ticket_id=121450 121450]<br />
RALPP [https://ggus.eu/?mode=ticket_info&ticket_id=121464 121464]<br />
LIVERPOOL [https://ggus.eu/?mode=ticket_info&ticket_id=121394 121394]<br />
BRUNEL [https://ggus.eu/?mode=ticket_info&ticket_id=121388 121388]<br />
BRISTOL [https://ggus.eu/?mode=ticket_info&ticket_id=121386 121386] ''Update - closed, thanks Winnie!''<br />
ECDF [https://ggus.eu/?mode=ticket_info&ticket_id=121360 121360]<br />
RHUL [https://ggus.eu/?mode=ticket_info&ticket_id=121421 121421]

'''QUESTIONING ROD'''<br />

[https://ggus.eu/?mode=ticket_info&ticket_id=121465 121465] (11/5)<br />
This ECDF availability ticket is "on the mend", but Andy has asked how the numbers are calculated. Waiting for a reply (16/5).
(This goes hand in hand with Andy's question in ECDF's other ROD ticket [https://ggus.eu/?mode=ticket_info&ticket_id=120004 120004].)

[https://ggus.eu/?mode=ticket_info&ticket_id=120714 120714] (9/4)<br />
I think this Sussex ROD ticket is solved; the link to the tests looks green (in a good way). I think it can be closed? In progress (28/4)

'''OXFORD'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=120019 120019] (7/3)<br />
Talking of tickets that can probably be closed, I think this CMS subscription change request issue is solved? Either way it could do with an update. In progress (29/4)

'''RHUL'''<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=121516 121516] (12/5)<br />
A biomed ticket, possibly caused by the same networking problems that affected ATLAS jobs (121540). It looks like this ticket snuck past your sentries and could do with acknowledgment. Assigned (12/5) ''Update - updated and in progress; hope the networking problems go away.''

'''BIRMINGHAM''' <br />
[https://ggus.eu/?mode=ticket_info&ticket_id=121125 121125] (28/4)<br />
Did you chaps have any luck getting your dumps working? Taking a peek myself, I see that your dumps directories are still empty. Let us know if you need a hand. In progress (4/5)

Are there any other tickets or issues people want bringing up?

And finally, the [https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 Other VO Nagios]...
General updates
|
Tuesday 17th May
- There was a WLCG GDB last week: Agenda | Minutes. Sessions included discussions on lightweight sites.
- Please could everyone update the HS06 table based on new purchases at sites.
- Steve: lcg-ca etc. tests going wrong....
- LSST:UK pilot, write-up... Tom turning it into a use-case page.
- Matt M: condor submission tools (Has anyone installed condor submission tools (direct submission) on a machine with an EMI-UI already installed?).
Monday 9th May
- Upgrades and improvements are being rolled out to the ATLAS Hammer Cloud tests.
- The latest WLCG ops meeting was yesterday - worth reading for a general picture of operations. Most issues reported over the last week concerned CERN.
- There was a GridPP Oversight Committee meeting last week. A positive review.
- Dan: WMS logs from failed ops tests showed GlueCEStateStatus: UNDEFINED; this turned out to be because the queues (or partitions, in Slurm speak) were set with MaxTime=INFINITE (this is the wall clock time limit). A minimal slurm.conf sketch follows this list.
- Andrew: HTCondor Machine/Job Features testing - request for volunteers please!
- There is a GDB tomorrow. One topic of wider interest will be the work on lightweight sites.
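For anyone hitting the same GlueCEStateStatus issue, here is a minimal slurm.conf sketch, assuming a partition named "grid" and illustrative node names - the point is simply that the partition needs a finite MaxTime for the info provider to publish a defined state:

    # slurm.conf excerpt (partition and node names are hypothetical)
    # MaxTime=INFINITE was the culprit above; set a finite wall clock
    # limit instead, e.g. 3 days:
    PartitionName=grid Nodes=wn[001-100] Default=YES State=UP MaxTime=3-00:00:00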
Tuesday 3rd May
- The WLCG T2 reliability/availability figures have arrived.
- ALICE: All okay.
- ATLAS:
- ECDF: 87%:87%
- BHAM: 78%:78%
- CMS: All okay.
- LHCb:
- QMUL: 12%:12%
- RHUL: 87%:89%
- ECDF: 84%:84%.
- Outcomes from the DPM Collaboration meeting. (see minutes at end)
- A reminder of next month’s GDB & pre-GDB and confirmation of topics. The pre-GDB, on Tuesday May 10th, will be an initial face-to-face meeting of the new Traceability & Isolation Working Group. The agenda, which is still being finalised, is available here. The GDB will take place on Wednesday May 11th. As well as some more routine updates and reports from the recent HEPiX and HEP Software Foundation workshops, there will be an in-depth session, convened by Maarten Litmaath and Julia Andreeva, focussing on ‘Easy Sites’ work around streamlining the running and managing of especially smaller WLCG grid sites.
|
WLCG Operations Coordination - Agendas Wiki Page
|
Monday 1st May
- There was a WLCG ops coordination meeting last Thursday. Agenda | Minutes.
- SL5 decommissioning (EGI, April 30, 2016). SL5 'ops' service probes will get CRITICAL. The whole retirement process is tracked here.
- The new Traceability and Isolation working group will have a dedicated session in the May pre-GDB.
- A new Task Force on Accounting Review is under preparation, to start addressing accounting issues.
- A detailed review of the Networking and Transfers WG was done. This includes a status report, ongoing actions and future R&D projects. More details in the agenda slides.
Monday 18th April
- MW Readiness WG achievements October 2015 - March 2016 - link to MB.
Tuesday 22nd March
- There was a WLCG ops coordination meeting last Thursday.
- T0 & T1s requested to check 2016 pledges attached to the agenda.
- The Multicore TF accomplished its mission. Its twiki remains as a documentation source.
- The gLExec TF also completed. Support will continue. Its twiki is up-to-date.
- There was a WLCG Middleware Readiness meeting last Wednesday.
Tuesday 15th March
|
Tier-1 - Status Page
|
Tuesday 17th May
A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.
- There was a significant problem with the Tier1 tape library during last week. We were without tape access, or had only limited tape access, for around four days.
- The Castor 2.1.15 update is on hold, pending resolution of a problem with memory usage by the Oracle database behind Castor. We had planned to update the Castor SRMs to version 2.14 in the meantime; however, following advice from CERN, this will not be done before the Castor update.
- "GEN Scratch" storage in Castor will be decommissioned. This has been announced via an EGI broadcast.
- We are migrating ATLAS data from T10000C to T10000D drives/media. We have moved around 300 out of 1300 tapes so far.
- One of the two batches of new CPU capacity is ready to go into production. Testing of the other new capacity hardware - i.e. the second batch of worker nodes and the disk - is proceeding OK.
- Catalin is organizing the CernVM Users Workshop at RAL on 6-8 June 2016.
- Technically nothing to do with the Tier1: the HEP SYSMAN dates for the RAL meeting are 21-23 June, with the first day being a security training day.
|
Storage & Data Management - Agendas/Minutes
|
Wednesday 18 May
- EGI data hub; see home page for the presentation download.
Wednesday 04 May
Wednesday 27 Apr
- Report from DPM collaboration meeting
- GridPP as an einfrastructure (update)
Wednesday 20 Apr
- Report from GridPP36. Closer to understanding what a future T2 looks like.
- Report from DataCentreWorld/CloudExpo: the usual bigger and better data centre servers and networks, combined with growing adoption of cloud applications and exciting new IoT devices.
Wednesday 06 Apr
- GridPP is a Science DMZ!
- Special Kudos to Marcus for all his exciting ZFS work (see blog!)
- No meeting next week (Wed 13th) due to GridPP36
|
Tier-2 Evolution - GridPP JIRA
|
Friday 29 Apr
Tuesday 5 Apr
- Vac 00.22 release (01.00 pre-release) ready. Includes Squid-on-hypervisor config in Puppet module.
- Aim to do Vac 01.00 release immediately after Pitlochry.
Tuesday 22 Mar
- GridPP DIRAC SAM tests now have gridpp and vo.northgrid.ac.uk pages
- Some pheno jobs have been run on VAC.Manchester.uk
- ATLAS cloud init VMs now running at Liverpool too
Tuesday 15 Mar
- Vac 00.21 deployed on all/some machines at all 5 production Vac sites
- Oxford Vac-in-a-Box set up in production
- ATLAS pilot factory at CERN configured for our ATLAS cloud init VMs
- ATLAS cloud init VMs running production jobs at Manchester and Oxford; being enabled at other sites
- vo.northgrid.ac.uk DIRAC jobs running in GridPP DIRAC VMs: this should work for any GridPP DIRAC VO, since the VO is a parameter in the VM configuration (see the sketch after this list).
- Multipayload LHCb pilot scripts tested in VMs: same pilot scripts can be used on VM or batch sites in multiprocessor slots.
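To illustrate the VM-config point above, a hypothetical Vac machinetype excerpt - the machinetype name, user_data URL and option name here are illustrative assumptions, not Vac's confirmed schema:

    # /etc/vac.d/gridpp-dirac.conf (names hypothetical throughout)
    [machinetype gridpp-dirac]
    # contextualisation template for the GridPP DIRAC VM
    user_data = https://example.gridpp.ac.uk/vm/gridpp-dirac/user_data
    # the VO is just a substituted parameter, so one VM image can
    # serve any GridPP DIRAC VO:
    user_data_option_dirac_vo = vo.northgrid.ac.uk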
Monday 29 Feb
- Vac 00.21 released, with new MJF
- EGI operations MB presentation positively received
|
Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06
|
Tuesday 9th February
- 4th Feb: The data from the APEL summariser that was fixed yesterday has now propagated through the data pipeline, and the Accounting Portal views and the Sync and Pub tests are all working again.
- Sheffield is slightly behind other sites (but looks normal) and so is QMUL.
Tuesday 24th November
- Slight delay for Sheffield.
Tuesday 3rd November
- APEL delay (normal state) Lancaster and Sheffield.
Tuesday 20th October
The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard (see talk).
|
Documentation - KeyDocs
|
Tuesday 17th May
- TW implementing use-cases summary matrix according to these:
- CERN@school - LUCID (full workflow with DIRAC)
- GalDyn (full workflow with DIRAC)
- PRaVDA (full workflow with DIRAC)
- LSST (full workflow with Ganga + DIRAC)
- EUCLID (full workflow at the RAL T1)
- DiRAC (data storage/transfer)
- SNO+ (data transfer/networking)
Wednesday 4th May
Introduction added to explain _why_ CGROUPS are desired when using ARC/Condor.
https://www.gridpp.ac.uk/wiki/Enable_Cgroups_in_HTCondor
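As a pointer, a minimal sketch of the condor_config knobs involved (HTCondor 8.x era), assuming the htcondor cgroup hierarchy has been set up on the worker node - see the wiki page above for the authoritative recipe:

    # condor_config excerpt: run each job in its own cgroup under "htcondor"
    BASE_CGROUP = htcondor
    # "soft" lets jobs use spare memory but reclaims it under pressure
    CGROUP_MEMORY_LIMIT_POLICY = soft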
Tuesday 26th April
Statement on recent history of LSST VOMS records. To be discussed at Ops Meeting.
- Feb 16: 3 x VOMS servers, port 15003, names: voms.fnal.gov (DigiCert-Grid, admin), voms1.fnal.gov (DigiCert-Grid, not admin), voms2.fnal.gov (opensciencegrid, admin).
- Apr 18: 1 x VOMS server, port 15003, names: voms.fnal.gov (opensciencegrid, admin). Caused by a security hitch that led to the omission of some data. GGUS: https://ggus.eu/?mode=ticket_info&ticket_id=120925. Fixed.
- Apr 21: 3 x VOMS servers, port 15003, names: voms.fnal.gov (opensciencegrid, admin), voms1.fnal.gov (opensciencegrid, not admin), voms2.fnal.gov (opensciencegrid, admin).
So, similar to how it was in Feb, but the DNs (and CA_DNs) of 2 certs changed. But other things to note:
Please discuss.
General note
See the worst KeyDocs list for documents needing review now and the names of the responsible people.
|
Interoperation - EGI ops agendas
|
Monday 9th May
- There was an EGI ops meeting today.
- SL5 status was reviewed. Services remaining: WMS/LB being decommissioned at Glasgow; Lancaster BDII, scheduled for upgrade; Oxford SAM/VO Nagios, which is still on SL5 (plans are in place across all NGIs to move to a central ARGO service); and the RAL T1 Castor SRM systems, whose SRM upgrade is waiting on the Castor upgrade.
Monday 18th April
|
Monitoring - Links MyWLCG
|
Tuesday 1st December
Tuesday 16th June
- F Melaccio & D Crooks decided to add an FAQ section devoted to common monitoring issues under the monitoring page.
- Feedback welcome.
Tuesday 31st March
Monday 7th December
|
On-duty - Dashboard ROD rota
|
Tuesday 15th March
- A normal week with few alarms, which were fixed in time. Birmingham has a low-availability ticket. ECDF has a ticket on hold while it tests an ARC CE, since putting this CE in downtime might affect proper job tests from ATLAS.
Tuesday 16th February
- Team membership discussed at yesterday's PMB. We will need to look to the larger GridPP sites for more support.
|
Rollout Status WLCG Baseline
|
Tuesday 7th December
- Raul reports: validation of the site BDII on CentOS 7 is done.
Tuesday 15th September
Tuesday 12th May
- MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.
|
Security - Incident Procedure Policies Rota
|
Tuesday 17th May
- Updated IGTF distribution version 1.74 is available. Note: deploying 1.74 prevents monitoring errors now that the old NorduGrid (2006) CA has expired (May 15th). A quick expiry check is sketched below.
- Incident EGI-20160509-01 reported to site security contacts last week. No known links or impact on the UK NGI but, as usual, please check the information and recommendations in the report.
- UK NGI Security team meeting Weds 18th.
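For sites wanting to confirm which CA certificates in their trust store have expired, a quick check - the file name is illustrative; adjust it to whatever your /etc/grid-security/certificates directory actually contains:

    # print the expiry date of an installed CA certificate (file name illustrative)
    openssl x509 -noout -enddate -in /etc/grid-security/certificates/NorduGrid.pem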
Tuesday 10th May
Tuesday 3rd May
Monday 25th April
- Security threat Risk Assessment undertaken by EGI.
The EGI security dashboard.
|
|
Services - PerfSonar dashboard | GridPP VOMS
|
- This section includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)
Tuesday 10th May
- Next LHCOPN and LHCONE meeting: Helsinki (FI), 19-20 September 2016: Agenda.
Tuesday 22nd March
Tuesday 8th December
- Given the recent network issues and the role of GridPP DIRAC, there are plans to have a slave DNS for gridpp.ac.uk at Imperial and hopefully the T1 too (a config sketch follows below). Andrew will seek an update to the whois records and name servers once the other host sites are confirmed.
- Looking at the network problems this week will be of interest. Duncan supplied this link, and Ewan the one for the dual-stack instance.
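For the planned slave DNS, the change on a secondary would look roughly like this named.conf excerpt - the zone file path and the primary's address are placeholders, since the host sites are not yet confirmed:

    // named.conf excerpt on a secondary (Imperial or the T1) - illustrative only
    zone "gridpp.ac.uk" {
        type slave;
        file "slaves/gridpp.ac.uk.db";
        masters { 192.0.2.1; };  // placeholder for the current primary's IP
    };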
|
Tools - MyEGI Nagios
|
Tuesday 5th April 2016
Oxford had a scheduled network warning, so the active Nagios instance was moved from Oxford to Lancaster. I am not planning to move it back to Oxford for the time being.
Tuesday 26th Jan 2016
One of the message brokers was in downtime for almost three days. The Nagios probes pick a random message broker, and failover is not working, so a lot of ops jobs hung for a long time. It is a known issue and unlikely to be fixed, as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.
Monday 30th November
- The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.
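As a rough guide to what that document covers, the usual EGI-style definitions are sketched below; treat them as an approximation and defer to the linked document for ARGO's exact treatment:

    availability = T_up / (T_total - T_unknown)
    reliability  = T_up / (T_total - T_unknown - T_scheduled_downtime)

Here T_up is time in the UP state, T_unknown is time with no monitoring data, and T_scheduled_downtime is declared downtime; reliability thus excuses scheduled downtime while availability does not.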
Tuesday 6 Oct 2015
Moved the Gridppnagios instance back to Oxford from Lancaster. It was a kind of double whammy, as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours, but there was no effect on EGI availability/reliability. Sites can look at https://mon.egi.eu/myegi/ss/ for a/r status.
Tuesday 29 Sep 2015
Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct, and this may be extended depending on the situation.
VO-Nagios was also unavailable for two days, but we started it yesterday as it is running on a VM. VO-Nagios is using the Oxford SE for the replication test, so it is failing those tests. I am looking to change to some other SE.
|
VOs - GridPP VOMS VO IDs Approved VO table
|
Tuesday 19th May
- There is a current priority for enabling/supporting our joining communities.
Tuesday 5th May
- We have a number of VOs to be removed. Dedicated follow-up meeting proposed.
Tuesday 28th April
- For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
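For anyone updating client configuration by hand, the corresponding vomses entry would look like the line below (standard five-field vomses format: alias, host, port, server DN, VO name) - the DN shown is a placeholder, not the server's verified certificate subject:

    "snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "/C=UK/O=eScience/OU=ExampleOU/L=ExampleL/CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"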
Tuesday 31st March
- LIGO are in need of additional support for debugging some tests.
- LSST now enabled on 3 sites. No 'own' CVMFS yet.
|
Site Updates
|
Tuesday 23rd February
ALICE:
All okay.
RHUL 89%:89%
Lancaster 0%:0%
RALPP: 80%:80%
RALPP: 77%:77%
- RHUL: The largest problem was related to the SRM. The DPM version was upgraded, and it took several weeks to get it working again (13 Jan onwards). There were several short-lived occurrences of running out of space on the SRM for non-ATLAS VOs. For around 3 days (15-17 Jan) the site suffered from a DNS configuration error by their site network manager, which removed their SRM from the DNS, causing external connections such as tests and transfers to fail. For one day (25 Jan) the site network was down for the upgrade to the 10Gb link to JANET. Some unexpected problems occurred, extending the interruption from an hour to a day. The link has now been successfully commissioned.
- Lancaster: The ASAP metric for Lancaster for January is 97.5%. There is a particular problem with ATLAS SAM tests, which does not affect site activity in production and analysis; it relates to the path name being too long. A re-calculation has been performed.
- RALPP: Both CMS and LHCb low figures are due to specific CMS jobs overloading the site SRM head node. The jobs should have stopped now.
|
|