Difference between revisions of "Operations Bulletin Latest"

 
{| width="100%" cellspacing="0" cellpadding="0" style="background-color: #ffffff; border: 1px solid silver; border-collapse: collapse; width: 100%; margin: 0 0 1em 0;"
 
|-
| style="background-color: #d8e8ff; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Week commencing Monday 25th February 2019
 
|}
  
====== ======

<!-- *********************************************************** ----->
'''Tuesday 18th June'''
* DPM Workshop last week - come to this week's storage meeting for an in-depth look.
** https://indico.cern.ch/event/776832/
* DIRAC downtime this week due to the move to Slough - good luck!
* EGI Ops meeting this week.
** https://wiki.egi.eu/wiki/Agenda-2019-06-17
** HTCondorCE commissioning ongoing; SRM decommissioning survey.
** State of SRM usage at DPM sites?


'''Tuesday 11th June'''
* Technical Meeting last week about the new JSON-based Information System: https://indico.cern.ch/event/821105/
* This week we will get round to looking at the outcome of the Security Day (and HEPSYSMAN).
* The DPM Workshop is this week: https://indico.cern.ch/event/776832/ There is a Vidyo room planned for people to listen in.
* CentOS7 migration: https://twiki.cern.ch/twiki/bin/view/AtlasComputing/CentOS7Deployment
 
<!-- ***********************Start General text*********************** ----->
'''Tuesday 4th June 2019'''
* The Security Day + HEPSYSMAN was on t'other week: https://indico.cern.ch/event/721692/
* Please can sites review their GOCDB information: https://ggus.eu/?mode=ticket_info&ticket_id=141296
* iris.ac.uk VO - Andrew explained this.
* New(-ish) HEP_OSlibs release - 7.2.9: https://twiki.cern.ch/twiki/bin/view/LCG/WLCGBaselineTable
* Gareth's query about the WS interface to ARC on TB-SUPPORT.
* Anything else?

'''Tuesday 3rd November'''
* There is a GDB this week: [https://indico.cern.ch/event/319753/ Agenda].
* (re)introduction of the STRICT_RFC2818 mechanism in Globus. See Jens's comments on TB-SUPPORT (a rough sketch of the relevant switch is given at the end of this entry).
* Re-allocation of space for ATLAS as a result of cleanup and removal of the ATLASPRODDISK space token.
* Time-outs of the Nagios glexec probe.
* The GridPP hardware allocations are just about final, so expect figures very soon. Purchases are this financial year.
* DPM workshop 2015, 7th-8th December at CERN - [https://indico.cern.ch/event/432642/ registration is open].
* The [http://wlcg-sam.cern.ch/reports/2015/201510/wlcg/ October WLCG A/R] figures are now available:
** [http://wlcg-sam.cern.ch/reports/2015/201510/wlcg/WLCG_All_Sites_ALICE_Oct2015.pdf ALICE]. All okay.
** [http://wlcg-sam.cern.ch/reports/2015/201510/wlcg/WLCG_All_Sites_ATLAS_Oct2015.pdf ATLAS]:
*** QMUL: 85%:85%
*** Lancaster: N/A:N/A?
*** Liverpool: 85%:100%
*** Sheffield: 78%:78%
** [http://wlcg-sam.cern.ch/reports/2015/201510/wlcg/WLCG_All_Sites_CMS_Oct2015.pdf CMS]. All okay.
** [http://wlcg-sam.cern.ch/reports/2015/201510/wlcg/WLCG_All_Sites_LHCB_Oct2015.pdf LHCb]:
*** QMUL: 79%:79%
*** Liverpool: 86%:100%
*** Sheffield: 86%:86%
*** RAL PPD: 79%:79%
* Exploring options for VM-based sites (in respect of the monitoring within EGI): perhaps set up a 'community platform'.
* RCUK Cloud Working Group - a first [http://bit.ly/cloudwgdec15 workshop on the 1st December] at Imperial College.
* From the MB last week:
** Memory requirements: the LHC experiments all basically agreed that 2GB/core is the baseline, but some (advertised) resources with up to 4GB/core would be valuable for some workflows.
** February as kick-off for the technical evolution groups.
** PCP - pre-commercial procurement and HNSciCloud. This is approved and starts in January 2016. The UK has a small involvement. The plan is to build on the resulting hybrid cloud service in order to deploy a European Open Science Cloud funded from the INFRADEV-04 (2016) call.
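Regarding the STRICT_RFC2818 item above: the host-name matching behaviour in recent Globus builds can be selected per process with a standard environment variable. A minimal sketch follows; the transfer command and hostname are purely illustrative and not a recommendation.
<pre>
# Fall back to the older hybrid name-matching for one transfer while testing.
export GLOBUS_GSSAPI_NAME_COMPATIBILITY=HYBRID
globus-url-copy file:///tmp/test.txt gsiftp://se01.example.ac.uk/tmp/test.txt
</pre>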
  
'''Tuesday 27th October'''
 
* HEPiX took place last week: [https://indico.cern.ch/event/384358/timetable/#all.detailed Agenda].
 
* pheno: VO ID card for SW_DIR. For contacts check [http://operations-portal.egi.eu/vo/help this link].
 
* [https://indico.cern.ch/e/453272 GridPP Technical Meeting] on Friday "virtual only".
 
* The latest WLCG [https://twiki.cern.ch/twiki/bin/view/LCG/WLCGDailyMeetingsWeek151026#Monday ops meeting minutes from Monday are available].
 
* There was a UK-T0 meeting last week. The other community talks may be of interest, they are linked from the [https://eventbooking.stfc.ac.uk/news-events/uk-t0-workshop-296?agenda=1 agenda].
 
* Current GridPP vacancies (+ putting them on the GridPP website).
 
* David Crooks is organising a "SOC" meeting.
 
  
* Raja raised: "ARC CE publishing" and querying the BDII.
 
* Luke asked about HTC CE documentation links (for European installation).
 
* John H asked for comments on a dmlite message after update to dpm 1.8.10.
 
  
 
<!-- **********************End General text************************** ----->
 
<!-- *********************************************************** ----->
<!-- ***********************Start ops coord text*********************** ----->
'''Tuesday 11th June'''
* I got stuck (figuratively) in my machine room last Thursday afternoon so missed it. [https://indico.cern.ch/event/823800/ Agenda.] [https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes190606 Minutes.] Ste was there - any observations?

'''Tuesday 14th May'''
* Next meeting this Thursday.

'''Tuesday 9th April'''
* Ops meeting last week: https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes190404

'''Tuesday 26th March'''
* For information, there was an Ops meeting on the 7th: https://twiki.cern.ch/twiki/bin/viewauth/LCG/WLCGOpsMinutes190307 (I think we might have discussed this one).
* Next meeting booked for 4th April.

'''Monday 11th February 2019'''
* There was a WLCG ops coordination meeting last Thursday. You can view the meeting notes [https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes190207 here].
** End of CREAM support Dec 2020.
** Migration from CREAM to be discussed at the EGI conference, 6-8 May in Amsterdam.
** Operational intelligence discussion ([https://indico.cern.ch/event/795889/sessions/302520/attachments/1792600/2920978/Operational_Intelligence_-_WLCG_Ops_Coord-2.pdf see slides]).
** Experiment updates (see the ATLAS discussion on DOMA).
** WG updates (few).

'''Tuesday 27th October'''
* 13th MW Readiness WG meeting THIS Wed 28/10 @ 4pm CET in CERN room 28-S-023 or via Vidyo.
* There was a WLCG ops coordination meeting last Thursday: [https://indico.cern.ch/event/393618/ Agenda] - [https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes151022 Minutes].
  
'''Next Meeting is scheduled for Thursday 22nd October'''
 
  
'''Tuesday 6th October'''
 
* There was a WLCG ops coordination meeting last week. [https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes151001 Minutes]. [https://indico.cern.ch/event/393617/ Agenda] (which has John Gordon's accounting slides).
 
* The highlights:
 
** dCache sites should install the latest fix for SRM solving a vulnerability
 
** All sites hosting a regional or local site xrootd should upgrade it at least to version 4.1.1
 
** CMS DPM sites should consider upgrading dpm-xrootd to version 3.5.5 now (from epel-testing) or after mid October (from epel-stable) to fix a problem affecting AAA
 
** Tier-1 sites should do their best to avoid scheduling OUTAGE downtimes at the same time as other Tier-1s supporting common LHC VOs. A calendar will be linked in the minutes of the 3 o'clock operations meeting to easily find out if there are already downtimes on a given date
 
** The multicore accounting for WLCG is now correct for 99.5% of the CPU time, with the few remaining issues being addressed. Corrected historical accounting data is expected to be available from the production portal by the end of the month
 
** All LHCb sites will soon be asked to deploy the "machine features" functionality
 
  
'''Tuesday 22nd September'''
 
* There was an ops coordination meeting last Thursday: [https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes150917 Minutes].
 
* Highlights:
 
**  All 4 experiments now have an agreed workflow with the T0 for tickets that should be handled by the experiment supporters but were accidentally assigned to the T0 service managers.
 
**  A new FTS3 bug fixing release 3.3.1 is now available.
 
**  A globus lib issue is causing problems with FTS3 for sites running IPv6.
 
** The rogue Glasgow configuration-management tool, which was replacing the current VOMS configuration with the old one, was picked up, but was unfortunately discussed as though sites had not got the message about using the new VOMS.
 
**  No network problems experienced with the transatlantic link despite 3 out of 4 cables being unavailable.
 
**  T0 experts are investigating the slow WN performance reported by LHCb and others.
 
**  A group of experts at CERN and CMS is investigating ARGUS authentication problems affecting CMS VOBOXes.
 
**  T1 & T2 sites please observe the actions requested by ATLAS and CMS (also on the WLCG Operations portal).
 
* Actions for [https://wlcg-ops.web.cern.ch/sys-admins/tasks Sites]; [https://wlcg-ops.web.cern.ch/experiments/tasks Experiments].
 
  
'''Tuesday 15th September'''
 
* The [https://indico.cern.ch/event/393616/ next WLCG ops coordination meeting is this Thursday 17th September]. Are there any Tier-2 issues we wish to raise? Minutes will appear [https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes150917 here].
 
  
  
<!-- *********************************************************** ----->
<!-- ***********************Start T1 text*********************** ----->
'''17 June 2019''' The Experiments Liaison Report (17/06/2019) is [https://www.gridpp.ac.uk/wiki/Tier1_Operations_Report_2019-06-17 here].
* Ongoing: we are seeing high outbound packet loss over IPv6. Central networking performed a firmware update on the border routers but this didn't resolve the issue. The plan is to move connections to the new border routers in mid June; we will do this before trying to debug any further (a quick way to measure the loss is sketched at the end of this box).

'''11 June 2019''' The Experiments Liaison Report (10/06/2019) is [https://www.gridpp.ac.uk/wiki/Tier1_Operations_Report_2019-06-10 here].
* Ongoing: we are seeing high outbound packet loss over IPv6. Central networking performed a firmware update on the border routers but this didn't resolve the issue. The plan is to move connections to the new border routers in mid June; we will do this before trying to debug any further.
* Three certificates were mistakenly revoked on ARGUS on Thursday. All SAM tests failed until this was fixed the next morning, and the batch farm did not start any new jobs during this time. We used this accidental draining to reboot nodes that needed to pick up security patching.
* The LHCb Castor instance has been completely disabled for LHCb and will be decommissioned.
* Brian Davies has transferred from his role as GridPP Tier-2 Storage Support Officer and has joined the Tier-1 Production Team. Although this has happened with immediate effect, he will still be available for ad-hoc/informal storage support.

'''Tuesday 3rd November'''
A reminder that there is a weekly [http://www.gridpp.ac.uk/wiki/RAL_Tier1_Experiments_Liaison_Meeting Tier-1 experiment liaison meeting]. Notes from the last meeting are [https://www.gridpp.ac.uk/wiki/Tier1_Operations_Report_2015-10-28 here].
* A week ago we had two disk server failures over the weekend (both part of AtlasTape). The servers have had disks replaced and have re-run the acceptance testing; we anticipate their return to service very soon. There is also one further disk server out of service in AtlasTape at the moment.
* We are investigating why LHCb batch jobs sometimes fail to write results back to Castor (and sometimes fail to write remotely as well).
* As reported over the last couple of weeks, we did have a problem with glexec on the worker nodes over a weekend. We are still trying to understand why this problem was not seen during the testing and roll-out of the new worker node configuration.
<!-- **********************End T1 text************************** ----->
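On the IPv6 packet-loss point above: a simple way to quantify the loss from a Tier-1 host is an extended ping/traceroute run. The target below is a placeholder and should be a well-connected IPv6 peer (for example a perfSONAR host at another site).
<pre>
# Basic loss figure over 100 probes.
ping6 -c 100 ps-bandwidth.example.ac.uk

# Per-hop loss, to help show whether the loss sits at the border routers.
mtr -6 --report --report-cycles 100 ps-bandwidth.example.ac.uk
</pre>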
 
|}
 
<!-- ****************End T1****************** ----->
 
  
 
<!-- ****************Start Storage & DM****************** ----->
 
<!-- ******************Edit start********************* ----->
  
'''[http://storage.esc.rl.ac.uk/weekly/20191030-minutes.txt Wed 30 Oct]'''
* DOME upgrade problems at Edinburgh
* Data management support/development for IRIS users

'''[http://storage.esc.rl.ac.uk/weekly/20191023-minutes.txt Wed 23 Oct]'''
* Rucio reporting

'''Wed 16 Oct'''
* CEPH workshop at CERN report

'''[http://storage.esc.rl.ac.uk/weekly/20191002-minutes.txt Wed 02 Oct]'''
* Safe to upgrade to DPM 1.13, but make sure the BDII is working if you support DIRAC (a quick check is sketched at the end of this box).
* Roadmap for xroot and http TPC for the RAL FTS(es)

'''[http://storage.esc.rl.ac.uk/weekly/20190925-minutes.txt Wed 25 Sept]'''
* Storage support for IRIS VOs?

'''[http://storage.esc.rl.ac.uk/weekly/20190918-minutes.txt Wed 18 Sept]'''
* Report from yesterday's Rucio Face Meeting at Coseners
* Suggestions for following up from yesterday's CEPH day hosted by CERN

'''[http://storage.esc.rl.ac.uk/weekly/20190911-minutes.txt Wed 11 Sept]'''
* Storage-related stuff at the FNAL (pre-)GDBs
* DOME upgrade tickets for non-DOME DPM sites

'''[http://storage.esc.rl.ac.uk/weekly/20190904-minutes.txt Wed 04 Sept]'''
* Banning in the SSC was not entirely successful in non-DOME DPM, and end of support is nigh; tickets to upgrade will go out shortly.
* Storage-and-data-management-wise, GridPP43 was interesting, although no-one volunteered to install the next CEPH.

'''[http://storage.esc.rl.ac.uk/weekly/20151028-minutes.txt Wednesday 28 Oct]'''
* Summary of the "UK T0" workshop - GridPP well represented
* Sites should not upgrade to DPM 1.8.10 just yet

'''[http://storage.esc.rl.ac.uk/weekly/20150902-minutes.txt Wednesday 02 Sep]'''
* Catch up with [http://www3.imperial.ac.uk/highenergyphysics/research/experiments/mice MICE]
* How to do transfers of '''lots''' of files with FTS3 without the proxy timing out (particularly if you need it vomsified)

'''[http://storage.esc.rl.ac.uk/weekly/20150812-minutes.txt Wednesday 12 Aug]'''
* Sort-of housekeeping: data cleanups, catalogue synchronisation - in particular namespace dumps for VOs
* GridPP storage/data at future events: GridPP35, HEPiX and cloud data events

'''[http://storage.esc.rl.ac.uk/weekly/20150708-minutes.txt Wednesday 08 July]'''
* Huge backlog of ATLAS data from Glasgow waiting to go to RAL, and oddly varying performance numbers - investigating
* How physics data is like your Windows 95 games

'''[http://storage.esc.rl.ac.uk/weekly/20150701-minutes.txt Wednesday 01 July]'''
* Feedback on CMS's proposal for listing contents of storage
* Simple storage on expensive raided disks vs complicated storage on el cheapo or archive drives?
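Regarding the Wed 02 Oct note on checking the BDII before a DPM 1.13 upgrade at DIRAC-supporting sites: a minimal sanity check is to query the storage element's resource BDII directly. The hostname below is a placeholder.
<pre>
# The GLUE2 endpoint on port 2170 should answer and return the storage service entry.
ldapsearch -x -LLL -H ldap://se01.example.ac.uk:2170 -b o=glue '(objectClass=GLUE2StorageService)' GLUE2ServiceID
</pre>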
  
  
====== ======
<!-- ******************Edit start********************* ----->
'''Tuesday 30 Apr 2019'''
* ATLAS CernVM4 VMs (equivalent to CentOS 7.6) being tested

'''Tuesday 3 Nov'''
* LHCb prototype of GOCDB pointers to the resource BDII done
* T2C tests at Oxford ongoing

'''Tuesday 20 Oct'''
* LHCb multipilot VMs now in production
* Support for APEL-Sync records in Vac, but we need to co-ordinate with the APEL team to validate it. This is to allow pure-VM sites like UCL to pass the APEL SAM tests (GRIDPP-10).
* The last GridPP Technical Meeting decided to test disk-less operation at Oxford for CMS (GRIDPP-20) and LHCb (GRIDPP-21).

'''Tuesday 6 Oct'''
* UCL Vac site now running an LHCb test of two payloads per dual-processor VM. Total of dual-processor VMs at UCL is now 120.

'''Tuesday 29 Sep'''
* UCL Vac site updated with the most recent version of Vac-in-a-Box. Now running ~216 jobs: LHCb MC and ATLAS certification jobs.
* Drawing up a list of tasks needed to be able to run a site for GridPP-supported VOs purely using VMs (e.g. VM certification by experiments etc.)
* Discussion at the GridPP Technical Meeting on storage options, including xrootd-based sites (i.e. xrootd, not DPM/dCache).

'''Tuesday 22 Sep'''
* Fortnightly [https://indico.cern.ch/category/4454/ GridPP Technical Meetings] on Fridays will have Tier-2 Evolution discussions, starting on Fri 25 Sept.

'''Thursday 17 Sep'''
* Task force to start developing advice for sites to simplify their operation in line with "6.2.5 Evolution of Tier-2 sites" in the GridPP5 proposal.
* Mailing list for Tier-2 evolution activities: gridpp-t2evo@cern.ch - anyone welcome to join.
* Also a [https://its.cern.ch/jira/browse/GRIDPP/ GridPP project] on the CERN JIRA service for tracking actions. Can be used with a full or lightweight CERN account. You need to be added manually or be on the gridpp-ops@cern.ch mailing list to browse issues.
  
 
<!-- ******************Edit stop********************* ----->
 
===== =====
<!-- ******************Edit start********************* ----->
'''Tuesday 6th February'''
* HEPSPEC06 on recent Intel CPUs. [https://www.gridpp.ac.uk/wiki/HEPSPEC06 GridPP benchmarking page]

'''Tuesday 30th January'''
* Please keep updating the [https://www.gridpp.ac.uk/wiki/HEPSPEC06 GridPP benchmarking page].

'''Tuesday 24th Oct'''
* A talk on end-to-end validation of any site's APEL accounting was presented at the WLCG Accounting Task Force meeting on 19th Oct. The slides are here: https://indico.cern.ch/event/673843/contributions/2756986/attachments/1542818/2420233/blackBoxAccTesting.pdf

'''Monday 16th January'''
* The discussion topic for next week will be accounting comparisons. Please note Alessandra's comments last week.

'''Monday 14th November'''
* Alessandra has written an [https://twiki.cern.ch/twiki/bin/view/LCG/AccountingFAQ FAQ] to extract numbers from ATLAS and APEL avoiding the SSB.

'''Monday 26th September'''
* A problem with the APEL Pub and Sync tests developed last Tuesday and was resolved on Wednesday. This had a temporary impact on the accounting portal.

'''Tuesday 3rd November'''
* APEL delay (normal state) for Lancaster and Sheffield.

'''Tuesday 20th October'''
* The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard; see this [https://indico.cern.ch/event/433303/contribution/9/attachments/1154494/1658853/2015-09-15-LCGMB-Benchmarking.pdf talk].

'''Tuesday 22nd September'''
* Slight delay for Sheffield but overall okay - although there is a gap between today's date and the most recent update for all sites. Perhaps an APEL delay.

'''Monday 20th July'''
* Oxford publishing 0 cores from CREAM today. Maybe they forgot to switch one off. [http://goc-accounting.grid-support.ac.uk/apel/jobs2_withsubmithost.html Check here].
* A reminder to keep updating the [https://www.gridpp.ac.uk/wiki/HEPSPEC06 HEPSPEC06 tables].
  
 
* [http://accounting.egi.eu/egi.php?SubRegion=1.67&query=normcpu&startYear=2014&startMonth=8&endYear=2014&endMonth=9&yRange=SITE&xRange=VO&voGroup=lhc&chart=GRBAR&scale=LIN&localJobs=onlygridjobs APEL status]: An issue at Sheffield?
 
{| style="background-color: #ffffff; border: 1px solid silver; border-collapse: collapse; width: 100%; margin: 0 0.5em 1em 0;"
|-
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: left; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-left: 0.4em; padding-top: 0.1em; padding-bottom: 0.1em;" | Documentation - [https://www.gridpp.ac.uk/keydocs?sort=area KeyDocs]
 
|-
| style="padding: 0.4em 0.4em 0.4em 0.4em;" |
<!-- ******************Edit start********************* ----->
  
''' Tue 9th July 2019'''

LHCb has added this to their requirements:

Sites not having an SRM installation must provide (a quick endpoint smoke test is sketched after the list):

* disk-only storage
* a GridFTP endpoint (a single DNS entry)
* an XROOT endpoint (a single DNS entry)
* a way to do the accounting (preferably following the WLCG TF standard: https://twiki.cern.ch/twiki/bin/view/LCG/StorageSpaceAccounting)
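A minimal way to exercise such GridFTP and xroot endpoints from a UI is sketched below using gfal2-util; the hostnames and paths are placeholders, not real LHCb endpoints.
<pre>
# Assumes LHCb membership; substitute your site's SE host and VO path.
voms-proxy-init --voms lhcb
gfal-copy file:///tmp/smoke-test.txt gsiftp://se01.example.ac.uk/dpm/example.ac.uk/home/lhcb/test/smoke-test.txt
gfal-ls -l root://se01.example.ac.uk//dpm/example.ac.uk/home/lhcb/test/smoke-test.txt
gfal-rm root://se01.example.ac.uk//dpm/example.ac.uk/home/lhcb/test/smoke-test.txt
</pre>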

''' Tue 16th April 2019'''

Minor change to LHCb requirements in Approved VOs:

* Sites having migrated to the CentOS7 (including "CERN CentOS 7") operating system or later versions are requested to provide support for Singularity containers.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#VO_Resource_Requirements
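As a rough check that a CentOS7 worker node satisfies this, something like the following can be run as an unprivileged user; the container image here is just an illustrative choice (experiments normally ship their own images via CVMFS).
<pre>
# Confirm Singularity is present and can start an unprivileged container.
singularity --version
singularity exec docker://centos:7 cat /etc/redhat-release
</pre>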

'''Tue 2nd April 2019'''

Changes to Approved VOs for DUNE (and LZ)

There is a new "Approved VOs" document showing new settings for DUNE. And, since both LZ's VOMS servers now show up in the EGI Portal, I've removed the (now spurious) entry for voms.hep.wisc.edu that was formerly being inserted by hand.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

I've also created a new set of RPMs (each has all LSC, VOMS, and XML for one VO) for all approved VOs. See the Approved VOs doc for details; the version is 1.9.

Sites with DUNE services need to update using whatever method they employ.

Special note: since voms1.fnal.gov changed yesterday, I've updated the documentation for that right away. But voms2.fnal.gov remains as it is until 23rd April (3 weeks); I'll update the documentation for that when it is closer. Staggering the updates like this gives a time margin, so any particular site can have at least one service properly configured at any time. But it does mean that two site updates are needed: one now, and one in three weeks. However, since it's sufficient for only one (of the two) VOMS servers to be configured properly, sites could save effort and wait until (say) 19th April, then update both at once. But I can't dictate what site admins should do. It's your call.

Ste


'''Tuesday 5th Mar 2019'''
New VOMS details for Biomed:
https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

New VOMS RPMs to match:

http://hep.ph.liv.ac.uk/~sjones/RPMS.voms/

'''Tuesday 12th Feb 2019'''
Documentation done for HTCondor-CE APEL accounting:
https://twiki.cern.ch/twiki/bin/view/LCG/HtCondorCeAccounting

'''Tuesday 12th Feb 2019'''
New CA DN for Biomed:
<pre>
< VOMS_CA_DN="'/C=FR/O=CNRS/CN=GRID2-FR' "
---
> VOMS_CA_DN="'/C=FR/O=MENESR/OU=GRID-FR/CN=AC GRID-FR Services' "
</pre>
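A quick way to spot any leftover references to the retired CA DN in a site's VOMS configuration (the paths below are the usual defaults and may differ per site):
<pre>
grep -r "GRID2-FR" /etc/grid-security/vomsdir /etc/vomses 2>/dev/null
</pre>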
  
'''Tuesday 5th Feb 2019'''
For enmr, the certificate of voms-02.pd.infn.it has changed:

New DN: /DC=org/DC=terena/DC=tcs/C=IT/L=Frascati/O=Istituto Nazionale di Fisica Nucleare/CN=voms-02.pd.infn.it

New CA_DN: /C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3

Please check approved VOs: https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs
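For sites using .lsc files, the above translates into something like the sketch below; the exact vomsdir path follows the usual convention and should be checked against the Approved VOs page.
<pre>
# /etc/grid-security/vomsdir/enmr.eu/voms-02.pd.infn.it.lsc
# First line: host DN, second line: CA DN.
/DC=org/DC=terena/DC=tcs/C=IT/L=Frascati/O=Istituto Nazionale di Fisica Nucleare/CN=voms-02.pd.infn.it
/C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3
</pre>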
'''Tuesday 20th October, 2015'''
Approved VOs document updated with a temporary section for [https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#Approved_VOs_in_the_process_of_being_established LZ].

'''Tuesday 29th September'''
Steve J: problems with the VOMS server at FNAL, voms.fnal.gov, have been detected; I will resolve them soon and may issue an update to Approved VOs, alerting sites via TB-SUPPORT should that occur. Approved VOs potentially affected are CDF, DZERO and LSST. Please do not act yet.

'''Tuesday 22nd September'''
* Steve J is going to undertake some GridPP/documentation usability testing.

'''Tuesday 18th August'''
* Lydia's document - set up a system to do data archiving using FTS3.

'''Tuesday 28th July'''
* Ewan: /cvmfs/gridpp-vo help ... there's a lot of historical stuff on the GridPP wiki that makes it look a lot more complicated than it is now. We really should have a bit of a clear-out at some point.

'''Tuesday 23rd June'''
* Reminder that documents need reviewing!

'''General note'''

See the [https://www.gridpp.ac.uk/keydocs?sort=reviewed worst KeyDocs list] for documents needing review now and the names of the responsible people.
  
 
|}
 
{| style="background-color: #ffffff; border: 1px solid silver; border-collapse: collapse; width: 100%; margin: 0 0.5em 1em 0;"
|-
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: left; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-left: 0.4em; padding-top: 0.1em; padding-bottom: 0.1em;" | Interoperation - [https://wiki.egi.eu/wiki/Operations_Meeting EGI ops agendas] [https://indico.egi.eu/indico/category/32/ Indico schedule]
 
|-
| style="padding: 0.4em 0.4em 0.4em 0.4em;" |
 
<!-- ******************Edit start********************* ----->
''' 14 May 2019 '''
* Meeting for May cancelled. Next meeting is on 10th June.

''' Monday 11th March '''
* Agenda: https://wiki.egi.eu/wiki/Agenda-2019-03-11
* UMD 4.8.2 released: ARC 15.03.19, to address how ARC counts HELD jobs.
* DPM legacy mode is going to end in June 2019 (see the check sketched below).
* EGI will open tickets after June.
* HTCondor CE accounting status?
* http://egi.ui.argo.grnet.gr/ : https://ggus.eu/index.php?mode=ticket_info&ticket_id=139877

''' Tuesday 12th February '''
* There was an EGI ops meeting yesterday - [https://indico.egi.eu/indico/event/4320/ Agenda]. We were asked to review the IPv6 readiness information ([https://wiki.egi.eu/w/index.php?title=IPV6_Assessment see here]). We should perhaps link in the GridPP IPv6 table for update status.

''' Friday 1st February '''
* Early adopters needed for HTCondor CE: https://ggus.eu/index.php?mode=ticket_info&ticket_id=139377

''' Thursday 17th January EGI OMB meeting '''
* CREAM CE to be out of support by Dec 2020.
* More effort needed to fix the HTCondor CE accounting issue.

''' Monday 14th January 2019 '''
* Agenda: https://wiki.egi.eu/wiki/Agenda-2019-01-14
* No UK-specific issues mentioned.
* The IPv6 readiness plan is going to be summarised at the OMB.

''' Tuesday 20th October '''
* Meeting last week: https://wiki.egi.eu/wiki/Agenda-12-10-2015
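On the end of DPM legacy mode noted above: a rough way for a DPM site to see whether the legacy daemons are still present and running on the head node is to list the relevant services; the daemon names shown are the typical legacy ones and may differ between installations.
<pre>
# Any dpnsdaemon/srmv2.2-style units still active indicate legacy (non-DOME) mode.
systemctl list-units --all --type=service | grep -Ei 'dpm|dpns|srm'
</pre>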
  
  
===== =====
<!-- ******************Edit start********************* ----->
'''Tuesday 4th July'''
* There were a number of useful links provided in the monitoring talks at the WLCG workshop in Manchester - especially those in the [https://indico.cern.ch/event/609911/timetable/#20170621 Wednesday sessions].

'''Monday 13th February'''
* This category is pretty much inactive. Are there any topics under "monitoring" that anyone wants reported at this ops meeting? If not, we will remove this section from the regular updates area of the bulletin and just leave the main links.

'''Tuesday 1st December'''
* Sites are kindly invited to update the monitoring status page at https://www.gridpp.ac.uk/wiki/Site_monitoring_status

'''Tuesday 16th June'''
* F Melaccio & D Crooks decided to add a [https://www.gridpp.ac.uk/wiki/Monitoring_FAQs FAQs section] devoted to common monitoring issues under the monitoring page.
''' Tuesday 31st March '''
 
 
* Glasgow Graphite/Grafana documentation: http://www.scotgrid.ac.uk/graphite/
 
 
'''Monday 7th December'''
 
 
* Meeting last Friday - agenda: https://indico.cern.ch/event/356853/ minutes: https://indico.cern.ch/event/356853/material/minutes/1.pdf
 
* This was the wrap-up meeting of the consolidation TF; the mailing list will remain extant for a while yet.
 
 
<!-- ******************Edit stop********************* ----->
 
|}
===== =====
<!-- ******************Edit start********************* ----->
'''Tuesday 5th February'''
* Birmingham decommissioning of SRM and BDII is still going on, so tickets are on hold.
* A few availability tickets on hold.
* Lancaster has a WebDAV ticket on hold which seems to be an effect of the DOME rollout.

'''Tuesday 14th August'''
* A couple of new availability tickets (QMUL and Lancaster), both for well-publicised reasons. Otherwise quiet. AM on shift this week.

'''Tuesday 3rd November'''
* The long-standing UCL availability alarm went green yesterday on 29th October. We are not sure why!
* Quite a lot of activity on the dashboard this week, but only one or two new tickets.
* Tickets: five for availability/reliability (Sussex, Sheffield, Liverpool, Lancaster and UCL); two for GLUE2 validation (Liverpool and QMUL); one for the CEs at QMUL.
  
'''Tuesday 20th October'''
 
 
Lots of bits here and there, but no big pattern. Tickets about CE and storage problems open at several sites. QMUL notable as going on for a while, probably with some kind of configuration problem they're not identifying.
 
 
 
'''Tuesday 6th October'''
 
* With the exception of the dashboard getting really confused early in the week as the Nagios instances at Oxford and Lancaster came and went, it's been a fairly quiet week.  There are four outstanding tickets:
 
** Three for availability / reliability (Sussex, Liverpool and Lancaster).
 
** One at Bristol for a GridFTP transfer problem.
 
 
'''Tuesday 15th September'''
 
* Generally quiet. QMUL have some grumbliness with the CEs. However, I understand much of this is caused by the batch farm being busy. There are low-availability tickets 'on hold' for Liverpool and UCL.
 
  
 
<!-- ******************Edit stop********************* ----->
 
===== =====
<!-- ******************Edit start********************* ----->
'''Monday 20th November'''
* IPv6: https://www.gridpp.ac.uk/wiki/IPv6_site_status (up to date as of November)
* Batch systems and WN moves to SL7/CentOS7: https://www.gridpp.ac.uk/wiki/Batch_system_status (added a column for the date of last update)

'''Tuesday 15th September'''
* [http://repository.egi.eu/2015/09/10/release-umd-3-13-3/ UMD 3.13.3 is available].
 
  
'''Tuesday 12th May'''
 
* MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.
 
  
'''Tuesday 17th March'''
 
* Daniela has updated the [https://www.gridpp.ac.uk/wiki/Staged_rollout_emi3 EMI-3 testing table]. Please check it is correct for your site. We want a clear view of where we are contributing.
 
* There is a middleware readiness meeting this Wednesday. Would be good if a few site representatives joined.
 
* Machine job features solution testing. Fed back that we will only commence tests if more documentation is made available. This stops the HTC solution until after CHEP. Is there interest in testing other batch systems? Raul mentioned SLURM. There are also SGE and Torque.
 
  
'''Historical References'''
  
 
* Staged Rollout pages (now separated into EMI1 & 2), and the page listing the deployed versions is extractable from the bdii, so they should all be reasonably up-to-date:
 
{| style="background-color: #ffffff; border: 1px solid silver; border-collapse: collapse; width: 100%; margin: 0 0.5em 1em 0;"
|-
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: left; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-left: 0.4em; padding-top: 0.1em; padding-bottom: 0.1em;" | Security - [https://www.gridpp.ac.uk/wiki/Report_Security_Incident Incident Procedure] [https://wiki.egi.eu/wiki/SPG:Documents Policies] [https://www.gridpp.ac.uk/wiki/SD_rota Rota]
 
|-
| style="padding: 0.4em 0.4em 0.4em 0.4em;" |
  
 
===== =====
'''Tuesday 11th June'''
* NTR

'''Tuesday 3rd November'''
* EGI SVG Advisory - various Java CVEs with the maximum CVSS score.

'''Monday 26th October'''
* Updated IGTF distribution version [https://dist.igtf.net/distribution/igtf/current/ 1.69] available.
* Next UK Security Team meeting scheduled for 28th Oct.

'''Tuesday 20th October'''
* The IGTF has released a regular update to the trust anchor repository ([https://rt.egi.eu/rt/Ticket/Display.html?id=9668 1.69]) - for distribution ON OR AFTER October 26th.

'''Tuesday 13th October'''
* Nothing to report.
* Next UK Security Team meeting scheduled for 28th Oct.

'''Monday 5th October'''
* Updated IGTF distribution version 1.68 available - https://dist.igtf.net/distribution/igtf/current/
* Update on incident broadcast EGI-20150925-01 relating to compromised systems in China: the EGI, WLCG and VO security teams are continuing their investigations. Affected sites and users have been contacted and there is no present indication of further action needed by any site in the UK. However, as more information comes to light, additional updates may be made in the near future and sites are asked, as always, to read any updates carefully, taking actions as recommended.

'''Tuesday 29th September'''
* Incident broadcast EGI-20150925-01 relating to compromised systems in China.
* UK security team meeting scheduled for 30th Sept.

'''Monday 29th September'''
* IGTF has released a regular update to the trust anchor repository (1.68) - for distribution ON OR AFTER October 5th.
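For sites that take the IGTF trust anchors from the EGI repository, picking up such a release is normally just a package update once it has been published; a rough sketch, assuming the standard EGI trustanchors repository is already configured:
<pre>
yum update ca-policy-egi-core
rpm -q ca-policy-egi-core
</pre>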

The EGI [https://operations-portal.egi.eu/csiDashboard security dashboard].
  
 
|}
 
{| style="background-color: #ffffff; border: 1px solid silver; border-collapse: collapse; width: 100%; margin: 0em 0 0 0.3em;"
|-
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: left; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-left: 0.4em; padding-top: 0.1em; padding-bottom: 0.1em;" | Services - [https://psmad.opensciencegrid.org/maddash-webui/index.cgi?dashboard=UK%20Mesh%20Config PerfSonar production dashboard] |[https://psetf.opensciencegrid.org/etf/check_mk/index.py?start_url=%2Fetf%2Fcheck_mk%2Fview.py%3Fhostgroup%3DUK%26opthost_group%3DUK%26view_name%3Dhostgroup PerfSonar ETF] | [http://opensciencegrid.org/networking/ OSG Networking and perfSONAR pages] | [https://voms.gridpp.ac.uk:8443/vomses/ GridPP VOMS]
 
|-
| style="padding: 0.4em 0.4em 0.4em 0.4em;" |
 
- This includes notifying of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update).
 
'''Monday 5th March'''
* Next LHCOPN and LHCONE meeting: [https://indico.cern.ch/event/772031/ Umeå, Sweden, 4-5 Jun 2019]. Registration required.

'''Monday 10th September'''
* Please check the perfSONAR status [https://psetf.opensciencegrid.org/etf/check_mk/index.py?start_url=%2Fetf%2Fcheck_mk%2Fview.py%3Ffilled_in%3Dfilter%26host_regex%3Duk%26view_name%3Dsearchhost here], especially the mesh URL (should be http://psconfig.opensciencegrid.org/pub/auto/FQDN); a quick check is sketched below.

'''Monday 2nd July'''
* Next LHCOPN and LHCONE meeting: [https://indico.cern.ch/event/725706/ Fermilab, Batavia US, 30-31 October 2018]. Registration required.

'''Monday 30th April'''
* The [https://indico.cern.ch/event/725706/ next LHCOPN and LHCONE joint meeting] will take place on Tuesday the 30th and Wednesday the 31st of October 2018.

'''Monday 19th February'''
Please could sites upgrade their perfSONAR hosts to CentOS7. Instructions are [https://opensciencegrid.github.io/networking/perfsonar/installation/ here]. Current OS versions [https://tinyurl.com/y9eode2b here].
* Duncan has recreated the UK perfSONAR mesh. [http://maddash.aglt2.org/maddash-webui/index.cgi?dashboard=UK%20Config Link here]!

'''Tuesday 6th October'''
* A reminder, the next [https://indico.cern.ch/event/401680/ LHCOPN and LHCONE joint meeting]: Science Park Amsterdam (NL), 28-29 October 2015.

'''Tuesday 14th July'''
* GridPP35 in September will have a partial focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the [https://gridpp.ac.uk/wiki/IPv6_site_status GridPP IPv6 status table].
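On the perfSONAR mesh URL point above (Monday 10th September): a quick way to confirm a host has been picked up by the OSG/WLCG pSConfig service is to fetch its auto-generated URL. The hostname below is a placeholder.
<pre>
# Should return the mesh configuration for this host; an empty or error response needs following up.
curl -s http://psconfig.opensciencegrid.org/pub/auto/ps-bandwidth.example.ac.uk | head
</pre>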
  
  
===== =====
<!-- ******************Edit start********************* ----->
32 Open Tickets this week, which is as in-depth a look as I've been able to take.

'''Monday 2nd November 2015, 13.30 GMT'''<br />
22 Open UK Tickets this week. First Monday of the Month, so all the tickets get looked at, however run-of-the-mill they are.

First, the link to all the UK [http://tinyurl.com/nwgrnys '''tickets'''].
 
'''SUSSEX'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116915 116915] (14/10)<br />
Low availability Ops ticket. On-holded whilst the numbers soothe themselves. On Hold (23/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116865 116865] (12/10)<br />
Sno+ job submission failures. Not much on this ticket since it was set In Progress. Looks like an ARGUS problem. How go things at Sussex before Matt RB moves on? (We'll miss you Matt!) In progress (20/10)

'''RALPP'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117261 117261] (28/10)<br />
ATLAS jobs failing with stage-out failures. Federico notices that the failures are due to odd errors - "file already existing" - and that things seem to be calming themselves. He's at a loss as to what RALPP can do. Checking the panda link suggests the errors are still there today. Waiting for reply (29/10)

'''BRISTOL'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116775 116775] (6/10)<br />
Bristol's CMS glexec ticket. It looks like the solution is to have more CMS pool accounts (which of course requires time to deploy). In progress (28/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117303 117303] (30/10)<br />
CMS, not Highlander fans, don't seem to believe that There can be only One (glexec ticket). Poor old Bristol seem to be playing whack-a-mole with duplicate tickets. Is there a note that can be left somewhere to stop this happening? Assigned (30/10)

'''ECDF'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=95303 95303] (Long long ago)<br />
Edinburgh's (and indeed Scotgrid's) only ticket is this tarball glexec ticket. A bit more on this later. On hold (18/5)

'''SHEFFIELD'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=114460 114460] (18/6)<br />
GridPP (and other) VO pilot roles at Sheffield. No news for a while; snoplus are trying to use pilot roles now for DIRAC, so this is becoming very relevant. In progress (9/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116560 116560] (30/9)<br />
Sno+ jobs failing, likely due to too many being submitted to the 10 slots that Sno+ has. Maybe a WMS scheduling problem - Stephen B has given advice. Elena asked if the problem persisted a few weeks ago. Waiting for reply (12/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116967 116967] (17/10)<br />
A ROD availability ticket, on hold as per SOP. On hold (20/10)

'''LANCASTER'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116478 116478] (28/9)<br />
Another availability ticket. Autumn was not kind to many of us! On hold (8/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116882 116882] (13/10)<br />
Enabling pilot snoplus users at Lancaster. Shouldn't have been a problem, but turned into a bit of a comedy/tragedy of errors by yours truly mucking up. Hopefully fixed now - thanks to Daniela for her patience. In progress (2/11)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=95299 95299] (Far far away)<br />
glexec tarball ticket. There's been a lot of communication with the glexec devs about this - the hopefully last hurdle is sorting out the RPATHs for the libraries. It's not a small hurdle though... On hold (2/11)

'''QMUL'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117151 117151] (23/10)<br />
A ticket about jumbo frame problems, submitted to QM. After Dan provided some education the user replied that he only sees this problem at two ATLAS sites, and that he is contacting the network admins at his institution to see if it is their end. On hold (29/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117011 117011] (19/10)<br />
ROD ticket for glue-validate errors. Went away for a while after Dan re-yaimed his site BDII, but possibly back again. Daniela suggests re-running the glue-validate test. Reopened (2/11)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116689 116689] (6/10)<br />
Another ROD ticket, where Ops glexec test jobs are seemingly timing out for QM (this is the ticket Daniela mentioned on the ops mailing list). Dan noted that with the cluster half full tests were passing, suggesting some kind of load correlation (but as he also notes - what's getting loaded and causing the problem: batch, CE or WNs?). Kashif reckons it's the ARGUS server, and suggests a handy glexec time test which he posted. In progress (2/11)

'''BRUNEL'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117324 117324] (2/11)<br />
A fresh-looking ROD ticket - Raul had to restart the BDII and hopefully that got it. In progress (2/11)

'''100IT'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116358 116358] (22/9)<br />
Missing Image at 100IT. 100IT have asked for more details; no news since. Waiting for reply (19/10)

'''THE TIER 1'''<br />
[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116866 116866] (12/10)<br />
Snoplus pilot enablement (not actually a word) at the Tier 1. New accounts were being requested after some internal discussion. On hold (19/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=116864 116864] (12/10)<br />
CMS AAA tests failing (the submitter notes "again..."). There are some oddities with other sites, which might be remote problems, but Andrew notes that previous manual fixes have been overwritten, which likely explains why problems came back. In progress (does it need to be waiting for a reply?) (26/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117171 117171] (24/10)<br />
LHCb had problems with an ARC CE that was misbehaving for everyone. Things were fixed, and this ticket can now be closed. Waiting for reply (can be closed) (27/10)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117277 117277] (30/10)<br />
ATLAS have spotted "bring online timeout has been exceeded". This appears to be a mixture of problems adding up, such as a number of broken disk nodes and heavy write access by ATLAS. In progress (2/11)

[https://ggus.eu/index.php?mode=ticket_info&ticket_id=117248 117248] (28/10)<br />
I believe related to the discussion on tb-support, this ticket requests that new SRM host certs that meet the specified requirements be requested for the RAL SRMs. Jens was on it, and the new certs are ready to be deployed. In progress (30/10)

[https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?host=all&servicestatustypes=16&hoststatustypes=15 '''Other VO Nagios'''] - some badness at Sussex, but they have a ticket open for that.
  
 
<!-- ******************Edit stop********************* ----->
 
 
===== =====
<!-- ******************Edit start********************* ----->
'''Monday 20th November'''
* The WLCG dashboard is available here: http://dashboard.cern.ch/.
* Please review the results on [http://pprc.qmul.ac.uk/~lloyd/gridpp/ Steve's test pages].

'''Tuesday 18th July'''
* Following our ops discussion last week, Steve will focus [http://pprc.qmul.ac.uk/~lloyd/gridpp/ his tests] on supporting the GridPP DIRAC area and decommission the other tests.

''' Tuesday 6 Oct 2015'''
Moved the Gridppnagios instance back to Oxford from Lancaster. It was a kind of double whammy as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.

'''Tuesday 29 Sep 2015'''
Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct and this may be extended depending on the situation.
VO-Nagios was also unavailable for two days but we started it yesterday as it is running on a VM. VO-Nagios is using the Oxford SE for the replication test so it is failing those tests. I am looking to change to some other SE.

'''Tuesday 09 June 2015'''
* ARC CEs were failing the Nagios test because of the non-availability of the EGI repository (the Nagios test compares the CA version against the EGI repo). It started on 5th June when one of the IP addresses behind the webserver was not responding. The problem went away in approximately 3 hours, then started again on 6th June. It was finally fixed on 8th June. No reason was given in any of the tickets opened regarding this outage.

'''Tuesday 17th February'''
* Another period where message brokers were temporarily unavailable was seen yesterday. Any news on the last follow-up?

'''Tuesday 27th January'''
* Unscheduled outage of the EGI message broker (GRNET) caused a short-lived disruption to GridPP site monitoring (jobs failed) last Thursday 22nd January. Suspect BDII caching meant no immediate failover to stomp://mq.cro-ngi.hr:6163/ from stomp://mq.afroditi.hellasgrid.gr:6163/

* [http://southgrid.blogspot.co.uk/2014/10/nagios-monitoring-for-non-lhc-vos.html Blog about VO Nagios]
* [https://vo-nagios.physics.ox.ac.uk/nagios/ Oxford VO Nagios] currently monitoring gridpp, pheno, t2k.org, snoplus.snolab.ca, vo.southgrid.ac.uk.
 
  
  
{| style="background-color: #ffffff; border: 1px solid silver; border-collapse: collapse; width: 100%; margin: 1em 0 0 0.3em;"
|-
| style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: left; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-left: 0.4em; padding-top: 0.1em; padding-bottom: 0.1em;" | VOs - [https://voms.gridpp.ac.uk:8443/vomses/ GridPP VOMS] [http://operations-portal.egi.eu/vo VO IDs] [https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs Approved] [http://pprc.qmul.ac.uk/~lloyd/gridpp/votable.html VO table]
 
|-
| style="padding: 0.4em 0.4em 0.4em 0.4em;" |
===== =====
<!-- ******************Edit start********************* ----->
'''Monday 20th November'''
* Tom Whyntie has requested (and been granted) access to the GridPP VO to get some pipelines working for large-scale processing and analysis of MRI scans associated with the [http://www.ukbiobank.ac.uk/ UK Biobank project].
* All VOs in the incubation page are being prompted for updates by the end of November (required input for the OC documents).
* QMUL (Steve L) is following up on the biomed MoU. GridPP wants to be cited in research papers for the support our resources/sites provide.

'''Tuesday 19th May'''
* There is a current priority for enabling/supporting our joining communities.

'''Tuesday 5th May'''
* We have a number of VOs to be removed. A dedicated follow-up meeting has been proposed.
  
'''Tuesday 28th April'''
 
* For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk  and voms03.gridpp.ac.uk  have both been updated from 15003 to 15503.
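For sites that maintain their own vomses entries, the corresponding lines would look something like the sketch below; the host DN strings are placeholders and should be taken from the Approved VOs page rather than from here.
<pre>
# /etc/vomses entries (format: "alias" "host" "port" "host DN" "VO name") - DNs are placeholders.
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "<DN of voms02.gridpp.ac.uk>" "snoplus.snolab.ca"
"snoplus.snolab.ca" "voms03.gridpp.ac.uk" "15503" "<DN of voms03.gridpp.ac.uk>" "snoplus.snolab.ca"
</pre>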
 
 
'''Tuesday 31st March'''
 
* LIGO are in need of additional support for debugging some tests.
 
* LSST now enabled on 3 sites. No 'own' CVMFS yet.
 
  
 
* Impact
 
===== =====
<!-- ******************Edit start********************* ----->
'''Tuesday 24th February'''
* Next review of status today.

'''Tuesday 27th January'''
* Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster.
* Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.

'''Tuesday 2nd December'''
* [https://www.gridpp.ac.uk/wiki/Batch_system_status Multicore status]. Queues available (63%)
** YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
** NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)

* According to our [https://www.gridpp.ac.uk/wiki/Batch_system_status table] for cloud/VMs (26%)
** YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
** NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)

* [http://www.gridpp.ac.uk/php/gridpp-dirac-sam.php?action=viewlcg GridPP DIRAC jobs] successful (58%)
** YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
** NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)

* [https://www.gridpp.ac.uk/wiki/IPv6_site_status IPv6 status]
** Allocation - 42%
** YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
** NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex

* Dual stack nodes - 21%
** YES: Brunel; IC; QMUL; Oxford (4)
** NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex; RAL T1 (15)

'''Tuesday 21st October'''
* High loads seen in xroot by several sites: Liverpool and RAL T1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).
  
'''Tuesday 9th September'''
 
* Intel announced the new generation of Xeon based on Haswell.
 
  
  

Latest revision as of 14:09, 30 October 2019

Bulletin archive


Week commencing Monday 25th February 2019
Task Areas
General updates

Tuesday 18th June


Tuesday 11th June

Tuesday 4th June 2019


WLCG Operations Coordination - AgendasWiki Page

Tuesday 11th June

  • I got stuck (figuratively) in my machine room last Thursday afternoon so missed it. Agenda. Minutes. Ste was there - any observations?

Tuesday 14th May

  • Next meeting this Thursday.

Tuesday 9th April


Tuesday 26th March

Monday 11th February 2019

  • There was a WLCG ops coordination meeting last Thursday. You can view the meeting notes here.
    • End of CREAM support is December 2020.
    • Migration from CREAM to be discussed at the EGI conference, 6-8 May, in Amsterdam.
    • Operational intelligence discussion (see slides).
    • Experiment updates (see the ATLAS discussion on DOMA).
    • WG updates (few).




Tier-1 - Status Page

17 June 2019: The Experiments Liaison Meeting report (17/06/2019) is here.

  • Ongoing: we are seeing high outbound packet loss over IPv6. Central networking performed a firmware update to the border routers but this didn't resolve the issue. The plan is to move connections to the new border routers in mid-June; we will do this before trying to debug any further.

11 June 2019: The Experiments Liaison Meeting report (10/06/2019) is here.

  • Ongoing: we are seeing high outbound packet loss over IPv6. Central networking performed a firmware update to the border routers but this didn't resolve the issue. The plan is to move connections to the new border routers in mid-June; we will do this before trying to debug any further.
  • Three certificates were revoked mistakenly on ARGUS on Thursday. All SAM tests failed until this was fixed the next morning. Batch farm also did not start any new jobs during this time. We used this accidental draining to reboot nodes that needed to pick up security patching.
  • The LHCb Castor instance has been completely disabled and will be decommissioned.
  • Brian Davies has transferred from his role as GridPP Tier-2 Storage Support Officer and joined the Tier-1 Production Team. Although this has happened with immediate effect, he will still be available for ad-hoc/informal storage support.
Storage & Data Management - Agendas/Minutes

Wed 30 Oct

  • DOME upgrade problems at Edinburgh
  • Data management support/development for IRIS users

Wed 23 Oct

  • Rucio reporting

Wed 16 Oct

  • CEPH workshop at CERN report

Wed 02 Oct

  • Safe to upgrade to DPM 1.13, but make sure the BDII is working if you support DIRAC (a quick check is sketched after this list).
  • Roadmap for xroot and http TPC for RAL FTS(es)
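A sketch of the BDII check mentioned above (not from the meeting notes): query the head node's BDII directly with ldapsearch. The hostname is a placeholder; 2170 is the usual BDII port, with GLUE1 published under o=grid and GLUE2 under o=glue.

  # sketch: confirm the DPM head node's BDII is answering (replace the hostname)
  ldapsearch -x -LLL -H ldap://dpm-head.example.ac.uk:2170 -b o=grid | head -20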

Wed 25 Sept

  • Storage support for IRIS VOs?

Wed 18 Sept

  • Report from yesterday's Rucio face-to-face meeting at Coseners
  • Suggestions for follow-up from yesterday's CEPH day hosted by CERN

Wed 11 Sept

  • Storage related stuff at the FNAL (pre-)GDBs
  • DOME upgrade tickets for non-DOME DPM sites

Wed 04 Sept

  • Banning in SSC not entirely successful in non-DOME DPM, and end of support is nigh; tickets to upgrade will go out shortly.
  • Storage-and-data-management-wise, GridPP43 was interesting although no-one volunteered to install the next CEPH.


Tier-2 Evolution - GridPP JIRA

Tuesday 30 Apr 2019

  • ATLAS CernVM4 VMs (equivalent to CentOS 7.6) being tested


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 6th February


Tuesday 30th January

Tuesday 24th Oct


Monday 16th January

  • The discussion topic for next week will be accounting comparisons. Please note Alessandra's comments last week.

Monday 14th November

  • Alessandra has written an FAQ on extracting numbers from ATLAS and APEL while avoiding the SSB.

Monday 26th September

  • A problem with the APEL Pub and Sync tests developed last Tuesday and was resolved on Wednesday. This had a temporary impact on the accounting portal.
Documentation - KeyDocs

Tue 9th July 2019

LHCb has added this to their requirements:

Sites not having an SRM installation must provide:

Tue 16th April 2019

Minor change to LHCb requirements in Approved VOs (a minimal site-side check is sketched below the link):

  • Sites having migrated to the Centos7 (including "Cern Centos7") operating system or later versions are requested to provide support for singularity containers.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs#VO_Resource_Requirements
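Not from the bulletin, but a minimal sketch of the kind of check a site admin might run on a CentOS 7 worker node to confirm singularity support; the container image used here is purely illustrative.

  singularity --version
  # run a trivial command inside a CentOS 7 container; the image choice is an example only
  singularity exec docker://centos:7 cat /etc/redhat-release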

Tue 2nd April 2019

Changes to Approved VOs for DUNE (and LZ)

There is a new "Approved VOs" document showing new settings for DUNE. And, since both LZ's voms servers now show up in the EGI Portal, I've removed the (now spurious) entry for voms.hep.wisc.edu that was formerly being inserted by hand.

https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

I've also created a new set of RPMs (each has all LSC, VOMS, and XML for one VO) for all approved VOs. See the Approved VOs doc for details; the version is 1.9.

Sites with DUNE services need to update using whatever method they employ.

Special note: Since voms1.fnal.gov changed yesterday, I've updated documentation for that right away. But voms2.fnal.gov remains as it is until 23rd April (3 weeks). I'll update documentation for that when it is closer. Staggering the updates like this gives a time margin, so any particular site can have at least one service properly configured at any time. But it does mean that two site updates are needed; one now, and one in three weeks. However, since it's sufficient for only one (of the two) voms servers to be configured properly, sites could save effort and wait until (say) 19th April, then update both at once. But I can't dictate what site admins should do. It's your call.

Ste
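As a sketch only (assuming the standard VOMS client tools, a valid user certificate and membership of the dune VO), a site or user could confirm that at least one of the two FNAL VOMS servers is correctly configured after an update:

  voms-proxy-init -voms dune -debug    # -debug shows which VOMS server was contacted
  voms-proxy-info -all                 # check the returned attributes and remaining lifetime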


Tuesday 5th Mar 2019

New VOMS details for Biomed: https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs

New VOMS RPMs to match:

http://hep.ph.liv.ac.uk/~sjones/RPMS.voms/

Tuesday 12th Feb 2019

Documentation done for HTCondor-CE APEL accounting: https://twiki.cern.ch/twiki/bin/view/LCG/HtCondorCeAccounting


Tuesday 12th Feb 2019

New CA DN for Biomed (old line marked '<', new line marked '>'):

< VOMS_CA_DN="'/C=FR/O=CNRS/CN=GRID2-FR' "
---
> VOMS_CA_DN="'/C=FR/O=MENESR/OU=GRID-FR/CN=AC GRID-FR Services' "

Tuesday 5th Feb 2019

For enmr, the certificate of voms-02.pd.infn.it has changed:

New DN: /DC=org/DC=terena/DC=tcs/C=IT/L=Frascati/O=Istituto Nazionale di Fisica Nucleare/CN=voms-02.pd.infn.it
New CA_DN: /C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3
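For sites that maintain LSC files by hand, the updated file would look roughly like the sketch below. The /etc/grid-security/vomsdir path is the usual layout, the enmr.eu directory name is an assumption based on the VO being referred to here as "enmr", and the file itself contains only the two DN lines (server DN first, CA DN second) - the '#' line just labels the path.

  # /etc/grid-security/vomsdir/enmr.eu/voms-02.pd.infn.it.lsc  (sketch)
  /DC=org/DC=terena/DC=tcs/C=IT/L=Frascati/O=Istituto Nazionale di Fisica Nucleare/CN=voms-02.pd.infn.it
  /C=NL/ST=Noord-Holland/L=Amsterdam/O=TERENA/CN=TERENA eScience SSL CA 3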

Please check approved VOs: https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs


General note

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Interoperation - EGI ops agendas Indico schedule

14 May 2019

  • Meeting for May cancelled. Next meeting is on 10th June.

Monday 11th March


Tuesday 12th February

  • There was an EGI ops meeting yesterday - Agenda. We were asked to review the IPv6 readiness information (see here). We should perhaps link in the GridPP IPv6 table for update status.


Friday 1st February

Thursday 17th January EGI OMB meeting

  • CREAMCE to be out of support by Dec 2020
  • More effort to fix accounting issue of HTCondor CE

Monday 14th January 2019

  • No UK-specific issues were mentioned.
  • IPv6 readiness plan is going to be summarized at OMB.



Monitoring - Links MyWLCG

Tuesday 4th July

  • There were a number of useful links provided in the monitoring talks at the WLCG workshop in Manchester - especially those in the Wednesday sessions.

Monday 13th February

  • This category is pretty much inactive. Are there any topics under "monitoring" that anyone wants reported at this ops meeting? If not, we will remove this section from the regular updates area of the bulletin and just leave the main links.

Tuesday 1st December


Tuesday 16th June

  • F Melaccio & D Crooks decided to add an FAQ section devoted to common monitoring issues under the monitoring page.
  • Feedback welcome.


On-duty - Dashboard ROD rota

Tuesday 5th February

  • Birmingham's decommissioning of its SRM and BDII is still going on, so tickets are on hold.
  • A few availability tickets are on hold.
  • Lancaster has a WebDAV ticket on hold, which seems to be an effect of the DOME rollout.

Tuesday 14th August

  • A couple of new availability tickets (QMUL and Lancaster), both for well-publicised reasons. Otherwise quiet. AM on shift this week.


Rollout Status WLCG Baseline

Monday 20th November



Historical References


Security - Incident Procedure Policies Rota

Tuesday 11th June

  • NTR


Services - PerfSonar production dashboard | PerfSonar ETF | OSG Networking and perfSONAR pages | GridPP VOMS

- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)

Monday 5th March

Monday 10th September

Monday 2nd July

Monday 30th April

Monday 19th February

Please could sites upgrade their perfSONAR hosts to CentOS7. Instructions are here. Current OS versions are here.

  • Duncan has recreated the UK perfSONAR mesh. Link here!


Tickets

32 open tickets this week, which is as in-depth a look as I've been able to take.

Tools - MyEGI Nagios

Monday 20th November

Tuesday 18th July

  • Following our ops discussion last week, Steve will focus his tests on supporting the GridPP DIRAC area and decommission the other tests.


VOs - GridPP VOMS VO IDs Approved VO table

Monday 20th November

  • Tom Whyntie has requested (and been granted) access to the GridPP VO to get some pipelines working for large-scale processing and analysis of MRI scans associated with the UK Biobank project.
  • All VOs on the incubation page are being prompted for updates by the end of November (required input for OC documents).
  • QMUL (Steve L) is following up on the biomed MoU. GridPP wants to be cited in research papers for the support our resources/sites provide.


Site Updates

Date



Meeting Summaries
Project Management Board - Members | Minutes | Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) - Agenda. The meeting takes place on Vidyo.

Highlights from this meeting are now included in the Tier1 report farther up this page.

WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events
UK ATLAS - Shifter view News & Links

Atlas S&C week 2-6 Feb 2015

Production

• Prodsys-2 has been in production since Dec 1st.

• Deployment has not been transparent; many issues have been solved and the grid is filled again.

• MC15 is expected to start soon, pending physics validation; evgen testing is underway and close to being finalised. Simulation is expected to be broadly similar to MC14, with no blockers expected.

Rucio

• Rucio has been in production since Dec 1st and is ready for LHC Run 2. Some areas need improvement, including the transfer and deletion agents, documentation and monitoring.

Rucio dumps available.

Dark data cleaning

• Lost-files declaration: only DDM ops can issue lost-files declarations for now; cloud support needs to file a ticket.

• WebDAV PanDA functional tests with HammerCloud are ongoing.

Monitoring

• Links: Main page, DDM Accounting, space, Deletion

ASAP

• ASAP (ATLAS Site Availability Performance) is in place. Every 3 months the T2 sites performing below 80% are reported to the International Computing Board.


UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A