General updates
|
Tuesday 29th September
- Nagios (and therefore the regional dashboard) was affected by a weekend A/C outage at Oxford.
- Steve J reports on: Condor libglobus_common problems
- There was an EGI OMB on 24th September. Agenda.
- Notes from the Monday biweekly WLCG ops meeting are available for anyone who is interested in the latest ops news.
- On the topic 'Perfsonar Bandwidth checks not running' Duncan reported a move to a full WLCG mesh.
- Tom would appreciate feedback on the GridPP website v2.
- Steve Lloyd has set up a metrics page as a basis for allocating T2 hardware funding. This just uses total Disk and total Elapsed and/or CPU time. At the PMB yesterday it was agreed that Elapsed time would be used, but the results of various combinations will be watched and assessed over the coming months. One overriding reason for using Elapsed time is that CPU time is not provided by all cloud implementations.
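As a concrete illustration of the metric choice above, a minimal sketch; the weighting and parameter names here are hypothetical, not the actual formula on Steve's page:

```python
# Illustrative only: rank sites on total disk plus total elapsed (wallclock)
# time, falling back to CPU time only when elapsed time is unavailable.
# Elapsed time is preferred because CPU time is not provided by all cloud
# implementations. The flat additive weighting is a made-up simplification.

def site_metric(disk_tb, elapsed_hours, cpu_hours=None):
    """Prefer elapsed time; use CPU time only if elapsed is missing."""
    work = elapsed_hours if elapsed_hours is not None else cpu_hours
    if work is None:
        raise ValueError("no usable work measure for site")
    return disk_tb + work  # a real metric would normalise the two terms

print(site_metric(500, 120000))        # grid site reporting elapsed time
print(site_metric(200, None, 75000))   # cloud site with only CPU reported
```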
Tuesday 22nd September
- Monday's WLCG weekly ops meeting minutes are available.
- There is an EGI Operations Management Board this Thursday. Do we have any items to raise?
- Several observations recently of FTS3 at RAL being overloaded.
- Federico raised: anomalous CPU usage for DIRAC ilc jobs.
- Looking at supporting DEAP3600 (RHUL, RAL and Sussex).
Tuesday 15th September
Tuesday 1st September
- From October/November, the EGI ops VO monitoring will be performed using RFC proxies, as opposed to legacy proxies.
- There will be a new filter for the critical profile for ATLAS WLCG SAM tests so that only production endpoints will be tested and taken into account for site availability metrics. This will be available from the SAM3 interface.
- CNAF: Due to a fire causing problems with one electrical supply line that happened last Thursday, the computing centre is running at lower capacity (around 30% less of the pledged capacity).
- Machine job features testing has hit a bug according to Raul!
- The HNSciCloud PCP pilot proposal was successfully submitted (refer to Andrew Sansum's talk at GridPP34 if you forget what this means!). The project intends to procure commercial cloud resources for FY17 and FY18. We will contribute 75K euro towards this activity and the EU will then top this up to 250K euro.
- There was an EGI Operations Management Board last Thursday. There are no summary notes yet, but please take a look at the agenda and linked talks (may be worth skimming them at the ops meeting).
- There was a quick request/reminder for sites to please update their IPv6 entries in the GridPP wiki.
- RAL is closed on Monday and Tuesday of this week!
Tuesday 24th August
|
WLCG Operations Coordination - Agendas
|
Tuesday 22nd September
- There was an ops coordination meeting last Thursday: Minutes.
- Highlights:
- All 4 experiments have now an agreed workflow with the T0 for tickets that should be handled by the experiment supporters and were accidentally assigned to the T0 service managers.
- A new FTS3 bug fixing release 3.3.1 is now available.
- A globus lib issue is causing problems with FTS3 for sites running IPv6.
- A rogue configuration management tool at Glasgow, which replaced the current VOMS configuration with the old one, was picked up and unfortunately discussed as though sites had not got the message about using the new VOMS servers.
- No network problems experienced with the transatlantic link despite 3 out of 4 cables being unavailable.
- T0 experts are investigating the slow WN performance reported by LHCb and others.
- A group of experts at CERN and CMS are investigating ARGUS authentication problems affecting CMS VOBOXes.
- T1 & T2 sites please observe the actions requested by ATLAS and CMS (also on the WLCG Operations portal).
- Actions for Sites; Experiments.
Tuesday 15th September
- There was a middleware readiness meeting on 16th September.
- The new DPM version is being tested via the ATLAS workflow by the Edinburgh Volunteer site.
- Many new sites showed interest in participating in MW Readiness testing with CentOS7. It is useful to anticipate MW behaviour ahead of new HW purchases. DPM validation on CentOS/SL7 is already ongoing at Glasgow.
- ATLAS and CMS are asked to declare whether the xrootd 4 monitoring plugin is important for them or not. As it is now, it doesn't work with dCache v. 2.13.8
- Despite the fact that FTS3 runs at very few sites we decided to test it for Readiness. In this context, ATLAS and CMS are asked to use the FTS3 pilot in their transfer test workflows
- PIC successfully tested dCache v.2.13.8 for CMS.
- CNAF has obtained Indigo-DataCloud effort to strengthen the ARGUS development team. The ARGUS collaboration will meet again early October. The problems faced at CERN with a CMS VOBOX are being investigated in ticket GGUS:116092.
- The next MW Readiness WG vidyo meeting will take place on Wednesday 28 October at 4pm CET.
|
Tier-1 - Status Page
|
Tuesday 29th September
A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting here
- The problems with the production FTS service have been resolved. A workaround to the memory leak introduced with the new version has been supplied. This, along with a reduction in the number of transfers queued, has enabled the service to return to normal operation.
- The second step in the upgrade of the Castor Oracle databases to version 11.2.0.4 took place last Tuesday. This was the upgrade of the "Neptune" standby database and the re-establishment of the Dataguard link ("Neptune" hosts the Atlas and GEN instance stagers). The next step is the upgrade of the "Pluto" database, which hosts the Nameserver as well as the CMS & LHCb stager databases. This will require all of Castor to be down for the day and is scheduled for 6th October.
- We have announced an 'at risk' for tomorrow morning (Wednesday 30th Sep.) as the Tier1's link into the RAL core network is upgraded to 40Gb. This will take place between 07:00 and 08:30.
|
Storage & Data Management - Agendas/Minutes
|
Wednesday 02 Sep
- Catch up with MICE
- How to do transfers of lots of files with FTS3 without the proxy timing out (in particular if you need it vomsified)
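One common mitigation for the proxy-timeout problem is to split a large transfer into several smaller FTS3 jobs, each sized to finish well within the proxy lifetime. A minimal sketch of that idea; the rate and lifetime figures are illustrative, and this is not an agreed procedure:

```python
# Hypothetical sketch: chunk a long file list into FTS3-sized jobs so that
# each job should complete well within the (VOMS) proxy lifetime.

def chunk_transfers(files, proxy_hours, rate_files_per_hour, safety=0.5):
    """Yield batches sized to fit within proxy_hours * safety at the given rate."""
    per_job = max(1, int(proxy_hours * safety * rate_files_per_hour))
    for i in range(0, len(files), per_job):
        yield files[i:i + per_job]

# Illustrative numbers: a 12-hour proxy and an assumed 50 files/hour.
files = [f"srm://example.ac.uk/file{i}" for i in range(1000)]
jobs = list(chunk_transfers(files, proxy_hours=12, rate_files_per_hour=50))
print(len(jobs), len(jobs[0]))  # 4 300
```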
Wednesday 12 Aug
- sort of housekeeping: data cleanups, catalogue synchronisation - in particular namespace dumps for VOs
- GridPP storage/data at future events; GridPP35 and Hepix and Cloud data events
Wednesday 08 July
- Huge backlog of ATLAS data from Glasgow waiting to go to RAL, and oddly varying performance numbers - investigating
- How physics data is like your Windows 95 games
Wednesday 01 July
- Feedback on CMS's proposal for listing contents of storage
- Simple storage on expensive raided disks vs complicated storage on el cheapo or archive drives?
Wednesday 24 June
- Heard about the Indigo datacloud project, a H2020 project in which STFC is participating
- Data transfers, theory and practice
- Somewhat clunky tools to set up but perform well when they run
- Will continue to work on recommendations/overview document
- Worth having recommendations/experiences for different audiences - (potential) users, decision makers, techies
|
Tier-2 Evolution - GridPP JIRA
|
Tuesday 22 Sep
Thursday 17 Sep
- Task force to start developing advice for sites to simplify their operation in line with "6.2.5 Evolution of Tier-2 sites" in the GridPP5 proposal.
- Mailing list for Tier-2 evolution activities: gridpp-t2evo@cern.ch - anyone welcome to join
- There is also a GridPP project on the CERN JIRA service for tracking actions. It can be used with a full or lightweight CERN account. You need to be added manually or be on the gridpp-ops@cern.ch mailing list to browse issues.
|
Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06
|
Tuesday 22nd September
- Slight delay for Sheffield but overall okay - although there is a gap between today's date and the most recent update for all sites. Perhaps an APEL delay.
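A check along the lines of the gap noticed above could be sketched as follows; the site names and dates are illustrative, and real data would come from the accounting portal:

```python
# Illustrative check for accounting publishing lag: flag any site whose most
# recent published record is more than `max_gap_days` behind today.
from datetime import date

def lagging_sites(last_published, today, max_gap_days=7):
    """Return sites whose latest record is older than max_gap_days."""
    return sorted(site for site, d in last_published.items()
                  if (today - d).days > max_gap_days)

today = date(2015, 9, 22)
last = {"UKI-NORTHGRID-SHEF-HEP": date(2015, 9, 10),   # 12 days behind
        "UKI-LT2-QMUL": date(2015, 9, 20)}             #  2 days behind
print(lagging_sites(last, today))  # ['UKI-NORTHGRID-SHEF-HEP']
```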
Monday 20th July
- Oxford publishing 0 cores from Cream today. Maybe they forgot to switch one off. Check here.
Tuesday 14th July
- QMUL and Sheffield appear to be lagging with publishing by a week.
- Please check your multicore publishing status (especially those sites mentioned in June).
|
Documentation - KeyDocs
|
Tuesday 22nd September
- Steve J is going to undertake some GridPP/documentation usability testing.
Tuesday 18th August
- Lydia's document - Setup a system to do data archiving using FTS3
Tuesday 28th July
- Ewan: /cvmfs/gridpp-vo help ... there's a lot of historical stuff on the GridPP wiki that makes it look a lot more complicated than it is now. We really should have a bit of a clear out at some point.
Tuesday 23rd June
- Reminder that documents need reviewing!
General note
See the worst KeyDocs list for documents needing review now and the names of the responsible people.
|
Interoperation - EGI ops agendas
|
Monday 13th July
- There was an EGI Ops meeting today: agenda
- URT/UMD updates:
- SR updates (small because it's summer):
- gfal2 2.9.1
- storm 1.11.9
- srm-ifce 1.23.1....
- gfal2-python 1.8.1
- In Verification
- gfal2-plugin-xrootd 0.3.4
- Accounting
- [John Gordon] "Of the WLCG sites we now have 97%+ of cpu reported with cores. I expect you all saw my recent email to GDB naming 16 sites. If one German and one Spanish site and the four Russians start publishing we will jump to 99%+"
- New list of sites needing to update multicore accounting being prepared this evening (Monday) by Vincenzo
- SL5 decommissioning date March 2016;
- Next meeting 10th August
Monday 15th June
- There was an EGI operations meeting today: agenda.
- New Action: for the NGIs: please start tracking which sites are still using SL5 services: how many services, and for each service if still needed on SL5, if upgrades on SL5 services are expected). A wiki has been provided to record updates. Also interesting to understand who is using Debian.
|
Monitoring - Links MyWLCG
|
Tuesday 16th June
- F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
- Feedback welcome.
Tuesday 31st March
Monday 7th December
|
On-duty - Dashboard ROD rota
|
Tuesday 15th September
- Generally quiet. QMUL have some grumbliness with the CEs; however, I understand much of this is caused by the batch farm being busy. There are low-availability tickets 'on hold' for Liverpool and UCL.
Tuesday 1st September
- Gordon was on shift.
- Another very quiet week with no new tickets. We have two open ROD tickets, both of which are for A/R; one is against Cambridge, and the other is the now 53-day-old ticket at UCL.
- Next up.... Kashif again (thanks Kashif!)
Monday 24th August
- Kashif on shift.
- There were quite a few alarms throughout the week and many tickets were opened. All of the tickets were fixed within the time limit.
- The certificate of the Argus server at Sheffield expired, but Elena got a new certificate quickly.
- Cambridge and UCL have low-availability tickets and not much can be done about them except waiting for availability to reach 90%.
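Expired host certificates, as with the Sheffield Argus server above, are easy to catch ahead of time; a minimal sketch using only the Python standard library (the date values are illustrative):

```python
# Sketch of a certificate-expiry check. The notAfter string format here is
# the one returned by ssl.getpeercert(); ssl.cert_time_to_seconds() converts
# it to epoch seconds.
import ssl
from datetime import datetime, timedelta

def days_until_expiry(not_after, now):
    """Days remaining before the certificate's notAfter time."""
    epoch = datetime(1970, 1, 1)
    expiry = epoch + timedelta(seconds=ssl.cert_time_to_seconds(not_after))
    return (expiry - now).days

# Illustrative values: warn well before the certificate lapses.
now = datetime(2015, 8, 24)
print(days_until_expiry("Sep 30 12:00:00 2015 GMT", now))  # 37
```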
|
Rollout Status WLCG Baseline
|
Tuesday 15th September
Tuesday 12th May
- MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.
Tuesday 17th March
- Daniela has updated the [https://www.gridpp.ac.uk/wiki/Staged_rollout_emi3 EMI-3 testing table]. Please check it is correct for your site. We want a clear view of where we are contributing.
- There is a middleware readiness meeting this Wednesday. Would be good if a few site representatives joined.
- Machine job features solution testing. Fed back that we will only commence tests if more documentation is made available. This stops the HTC solution until after CHEP. Is there interest in testing other batch systems? Raul mentioned SLURM. There is also SGE and Torque.
References
|
Security - Incident Procedure Policies Rota
|
Tuesday 29th September
- Incident broadcast EGI-20150925-01 relating to compromised systems in China.
- UK security team meeting scheduled for 30th Sept.
Monday 28th September
- IGTF has released a regular update to the trust anchor repository (1.68) - for distribution ON OR AFTER October 5th
Tuesday 15th September
- The security team meeting this week is cancelled. The next will be on 30th.
Tuesday 1st September
The EGI security dashboard.
|
|
Services - PerfSonar dashboard | GridPP VOMS
|
- This includes notifying of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update).
Tuesday 18th August
Tuesday 14th July
- GridPP35 in September will have a part focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.
|
Tools - MyEGI Nagios
|
Tuesday 09 June 2015
- ARC CEs were failing the Nagios test because of non-availability of the EGI repository. The Nagios test compares the CA version against the EGI repo. The problem started on 5th June, when one of the IP addresses behind the webserver stopped responding, and went away in approximately 3 hours. It started again on 6th June and was finally fixed on 8th June. No reason was given in any of the tickets opened regarding this outage.
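The probe's logic as described could be made robust to repository outages; a hypothetical reconstruction (the return values follow the usual Nagios states, but this is not the actual probe code):

```python
# Hypothetical reconstruction of the CA-version check: compare the locally
# installed CA distribution version against the one advertised by the EGI
# repository. If the repository is unreachable, report UNKNOWN rather than
# CRITICAL, so a repo outage does not fail every CE at once.

def ca_check(local_version, repo_version):
    """Return a Nagios-style state for the CA version comparison."""
    if repo_version is None:          # repository unreachable
        return "UNKNOWN"
    if local_version == repo_version:
        return "OK"
    return "CRITICAL"

print(ca_check("1.68-1", "1.68-1"))  # OK
print(ca_check("1.67-1", "1.68-1"))  # CRITICAL: site lagging behind repo
print(ca_check("1.68-1", None))      # UNKNOWN: repo outage, don't alarm
```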
Tuesday 17th February
- Another period where message brokers were temporarily unavailable seen yesterday. Any news on the last follow-up?
Tuesday 27th January
- An unscheduled outage of the EGI message broker (GRNET) caused a short-lived disruption to GridPP site monitoring (jobs failed) last Thursday 22nd January. Suspect BDII caching meant no immediate failover to stomp://mq.cro-ngi.hr:6163/ from stomp://mq.afroditi.hellasgrid.gr:6163/
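A failover of the kind that failed to happen here can be sketched as follows; the `connect` argument is injectable so the ordering logic can be exercised without network access, and the stub marking the Greek broker as down is purely illustrative:

```python
# Illustrative broker failover over the two stomp endpoints named above:
# try each URI in order and return the first that accepts a connection.
import socket

BROKERS = ["stomp://mq.afroditi.hellasgrid.gr:6163/",
           "stomp://mq.cro-ngi.hr:6163/"]

def first_reachable(uris, connect):
    """Return the first URI whose host:port accepts a connection, else None."""
    for uri in uris:
        host, port = uri.split("//")[1].rstrip("/").rsplit(":", 1)
        try:
            connect(host, int(port))
            return uri
        except OSError:
            continue
    return None

def tcp_connect(host, port, timeout=5):
    """Real probe: open and immediately close a TCP connection."""
    socket.create_connection((host, port), timeout=timeout).close()

# Offline demonstration with a stub that pretends the first broker is down:
down = {"mq.afroditi.hellasgrid.gr"}
def stub(host, port):
    if host in down:
        raise OSError("connection refused")

print(first_reachable(BROKERS, stub))  # stomp://mq.cro-ngi.hr:6163/
```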
|
VOs - GridPP VOMS VO IDs Approved VO table
|
Tuesday 19th May
- There is a current priority for enabling/supporting our joining communities.
Tuesday 5th May
- We have a number of VOs to be removed. Dedicated follow-up meeting proposed.
Tuesday 28th April
- For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
Tuesday 31st March
- LIGO are in need of additional support for debugging some tests.
- LSST now enabled on 3 sites. No 'own' CVMFS yet.
|
Site Updates
|
Tuesday 24th February
- Next review of status today.
Tuesday 27th January
- Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
- Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.
Tuesday 2nd December
- Multicore status. Queues available (63%)
- YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
- NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
- According to our table for cloud/VMs (26%)
- YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
- NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
- GridPP DIRAC jobs successful (58%)
- YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
- NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
- IPv6 status
- Allocation - 42%
- YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
- NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex
- Dual stack nodes - 21%
- YES: Brunel; IC; QMUL; Oxford (4)
- NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex, RAL T1 (15)
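The percentages quoted in this status list appear to be fractions of the 19 GridPP sites tracked; a quick consistency check:

```python
# Each YES count divided by the 19 sites listed reproduces the quoted figure.
def pct(yes, total=19):
    return round(100 * yes / total)

print(pct(12))  # multicore queues available: 63%
print(pct(5))   # cloud/VMs: 26%
print(pct(11))  # GridPP DIRAC jobs successful: 58%
print(pct(8))   # IPv6 allocation: 42%
print(pct(4))   # dual stack nodes: 21%
```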
Tuesday 21st October
- High loads seen in xroot by several sites: Liverpool and RALT1... and also Bristol (see Luke's TB-S email on 16/10 for questions about changes to help).
Tuesday 9th September
- Intel announced the new generation of Xeon based on Haswell.
|
|