General updates
|
Tuesday 24th November
- Some ATLAS sites (as of Monday) still had PRODDISK listed in the BDII.
- An agenda for December's GDB is developing - see here. Tier-2 representation has waned recently. Perhaps someone attending the DPM meeting could attend the GDB too?
Tuesday 17th November
Tuesday 10th November
- There was a WLCG GDB last week: Agenda. Minutes.
- ATLAS is going to move the brokering to use maxrss value rather than maxrss+maxswap (called maxmemory in the ATLAS panda queues). Reminder notes.
- There is an EGI meeting in Bari this week. The OMB agenda/slides can be seen here.
- An update from John Gordon on the CPU efficiency accounting discussion of last week: "Stuart has the database merge under way now. We had hoped it would be done by the end of October but it isn’t complete yet. When done and we send an integrated dataset to the portal they will put the current dev view in the production portal alongside the current one. They are currently developing a major rewrite of the portal and don’t want to mess about with it too much."
- J Perkin: Weird user error when reading the LFC. This led to a reminder that CA team support is currently best-efforts - please be patient.
- J Hill: ATLAS consistency checks at Tier-2s require sites to upload storage dumps - please can we have a clear statement of what parameters ATLAS wants us to use for these dumps?
- Simon G: Asked about Tier3 access to Tier2 storage.
- LCG-ROLLOUT: TOP BDII issues with CentOS 6.7 (openldap-servers-2.4.40-5.el6.x86_64). It basically breaks but is being followed up with RH.
- For those interested in the ARGUS working group meeting last week please see their summary.
- There was a GridPP Technical Meeting last Friday: Agenda.
|
WLCG Operations Coordination - Agendas Wiki Page
|
Tuesday 17th November
Tuesday 10th November
- There was a WLCG ops coordination meeting last Thursday: Minutes. (Alessandra chaired and might be able to talk over the main items at our ops meeting).
Tuesday 27th October
- 13th MW Readiness WG meeting THIS Wed 28/10 @ 4pm CET in CERN room 28-S-023 or via vidyo.
- There was a WLCG ops coordination meeting last Thursday: Agenda - Minutes.
Next Meeting is scheduled for Thursday 22nd October
Tuesday 6th October
- There was a WLCG ops coordination meeting last week. Minutes. Agenda (which has John Gordon's accounting slides).
- The highlights:
- dCache sites should install the latest fix for SRM, which solves a vulnerability
- All sites hosting a regional or local site xrootd should upgrade it to at least version 4.1.1
- CMS DPM sites should consider upgrading dpm-xrootd to version 3.5.5 now (from epel-testing) or after mid October (from epel-stable) to fix a problem affecting AAA
- Tier-1 sites should do their best to avoid scheduling OUTAGE downtimes at the same time as other Tier-1s supporting common LHC VOs. A calendar will be linked in the minutes of the 3 o'clock operations meeting to make it easy to find out whether there are already downtimes on a given date
- The multicore accounting for WLCG is now correct for 99.5% of the CPU time, with the few remaining issues being addressed. Corrected historical accounting data is expected to be available from the production portal by the end of the month
- All LHCb sites will soon be asked to deploy the "machine features" functionality
Tuesday 22nd September
- There was an ops coordination meeting last Thursday: Minutes.
- Highlights:
- All 4 experiments now have an agreed workflow with the T0 for tickets that were accidentally assigned to the T0 service managers but should be handled by the experiment supporters.
- A new FTS3 bug-fix release, 3.3.1, is now available.
- A Globus library issue is causing problems with FTS3 for sites running IPv6.
- A rogue configuration management tool at Glasgow replaced the current VOMS configuration with the old one; this was picked up and unfortunately discussed as though sites had not got the message about using the new VOMS servers.
- No network problems experienced with the transatlantic link despite 3 out of 4 cables being unavailable.
- T0 experts are investigating the slow WN performance reported by LHCb and others.
- A group of experts at CERN and CMS is investigating ARGUS authentication problems affecting CMS VOBOXes.
- T1 & T2 sites please observe the actions requested by ATLAS and CMS (also on the WLCG Operations portal).
- Actions for Sites; Experiments.
Tuesday 15th September
|
Tier-1 - Status Page
|
Tuesday 24th November
A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.
- We are investigating why LHCb batch jobs sometimes fail to write results back to Castor (and they sometimes fail to write remotely as well).
- We are continuing with some detailed network changes needed to remove our old core switch from the network. On Wednesday morning (18th) the link to the Atlas building (R26) was successfully moved. (There was a site 'warning' in the GOC DB for this.)
- A week or so ago we saw some ATLAS HammerCloud tests failing (loss of heartbeat) - although the problem seems to have gone away now. This is not understood yet.
- We have found a problem on some disk servers from one particular batch that have been updated to SL6. The servers can run slowly and individual commands hang (until a timeout) while making name look-ups. So far a total of three servers have been affected, spread over a couple of weeks. We can easily fix the problem but do not yet know why it occurs.
- The tenders for the next round of Disk and CPU purchases are now out.
- All of the Tier-1's Castor tape servers are now running Castor version 2.1.15. (The rest of Castor is still at 2.1.14 - with the aim of upgrading early-ish in 2016.)
|
Storage & Data Management - Agendas/Minutes
|
Wednesday 28 Oct
- Summary of "UK T0" workshop - GridPP well represented
- Sites should not upgrade to DPM 1.8.10 just yet
Wednesday 02 Sep
- Catch up with MICE
- How to do transfers of lots of files with FTS3 without the proxy timing out (in particular if you need it vomsified) - see the sketch below
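A minimal sketch of the long-delegation approach, using the FTS3 Python "easy" bindings (fts3-rest). The endpoint and file URLs here are examples, not a prescription: the idea is to create the proxy with voms-proxy-init so the delegated credential carries the VOMS attributes, then delegate it to the FTS3 server for longer than the expected queue time.

    # Minimal sketch, assuming the fts3-rest Python bindings and a valid
    # VOMS proxy (voms-proxy-init --voms <vo>). Endpoint/URLs are examples.
    from datetime import timedelta
    import fts3.rest.client.easy as fts3

    context = fts3.Context('https://lcgfts3.gridpp.rl.ac.uk:8446')

    # Delegate the proxy to the FTS3 server for longer than the expected
    # queue time, so transfers keep running after the local proxy expires.
    fts3.delegate(context, lifetime=timedelta(hours=48), force=True)

    transfers = [fts3.new_transfer('gsiftp://source.example.org/data/f%d' % i,
                                   'gsiftp://dest.example.org/data/f%d' % i)
                 for i in range(1000)]
    job = fts3.new_job(transfers, retry=3)
    print(fts3.submit(context, job))  # returns the job ID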
Wednesday 12 Aug
- Sort of housekeeping: data cleanups and catalogue synchronisation - in particular namespace dumps for VOs (a rough sketch follows this list)
- GridPP storage/data at future events; GridPP35 and Hepix and Cloud data events
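On namespace dumps: a rough sketch of producing a flat dump by walking the storage namespace with the gfal2 Python bindings. The SRM base URL is an example, and for a large namespace a site would normally dump straight from the DPM/dCache database instead; this is only the protocol-level fallback.

    # Rough sketch: flat namespace dump via the gfal2 Python bindings.
    # The base URL is an example; dumping from the DPM/dCache database
    # directly is far faster for large namespaces.
    import stat
    import gfal2

    def walk(ctx, url, out):
        # Recurse through the namespace, writing "path<TAB>size" per file.
        for entry in ctx.listdir(url):
            child = url.rstrip('/') + '/' + entry
            info = ctx.stat(child)
            if stat.S_ISDIR(info.st_mode):
                walk(ctx, child, out)
            else:
                out.write('%s\t%d\n' % (child, info.st_size))

    ctx = gfal2.creat_context()
    with open('namespace-dump.txt', 'w') as out:
        walk(ctx, 'srm://se.example.ac.uk/dpm/example.ac.uk/home/atlas', out)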
Wednesday 08 July
- Huge backlog of ATLAS data from Glasgow waiting to go to RAL, and oddly varying performance numbers - investigating
- How physics data is like your Windows 95 games
Wednesday 01 July
- Feedback on CMS's proposal for listing contents of storage
- Simple storage on expensive raided disks vs complicated storage on el cheapo or archive drives?
|
Tier-2 Evolution - GridPP JIRA
|
Tuesday 17 Nov
- New depo.gridpp.ac.uk service for uploading files via HTTPS (see the sketch after this list)
- ATLAS VMs now upload log files to depo.gridpp.ac.uk for debugging (GRIDPP-24)
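A guess at what an upload might look like, assuming the service accepts an HTTP PUT authenticated with a standard grid proxy; the upload path and the authentication scheme are assumptions, so check the service documentation before relying on this.

    # Hypothetical sketch of an upload to depo.gridpp.ac.uk. Assumes the
    # service accepts PUT with X.509 proxy authentication; the path and
    # auth scheme are assumptions, not documented behaviour.
    import os
    import requests

    proxy = '/tmp/x509up_u%d' % os.getuid()  # default grid proxy location
    url = 'https://depo.gridpp.ac.uk/myvm/debug.log'  # example path

    with open('debug.log', 'rb') as f:
        resp = requests.put(url, data=f,
                            cert=(proxy, proxy),  # proxy file holds cert+key
                            verify='/etc/grid-security/certificates')
    resp.raise_for_status()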
Monday 9 Nov
- Vac 0.19.0 released last week: new VacQuery protocol
- Prototyping Vcycle(/Vac) GLUE2 publishing: talk at WLCG InfoSys Evolution TF on Thursday
- Please now use https://repo.gridpp.ac.uk/vacproject/ URLs for user_data and cernvm3.iso files (in preparation for move to new www.gridpp.ac.uk website.)
- See Laurence's GDB talk "Helix Nebula Price Enquiry Results" for more use of Vcycle
Tuesday 3 Nov
- LHCb prototype of GOCDB pointers to resource BDII done
- T2C tests at Oxford ongoing
Tuesday 20 Oct
- LHCb multipilot VMs now in production
- Support for APEL-Sync records in Vac, but need to co-ordinate with APEL team to validate it. This is to allow pure-VM sites like UCL to pass APEL SAM tests (GRIDPP-10)
- Last GridPP Technical Meeting decided to test disk-less operation at Oxford for CMS (GRIDPP-20) and LHCb (GRIDPP-21).
Tuesday 6 Oct
- UCL Vac site now running LHCb test of two payloads per dual-processor VM. Total of dual-processor VMs at UCL now 120.
Tuesday 29 Sep
- UCL Vac site updated with most recent version of Vac-in-a-Box. Now running ~216 jobs: LHCb MC and ATLAS certification jobs.
- Drawing up list of tasks needed to be able to run a site for GridPP-supported VOs purely using VMs (e.g. VM certification by experiments etc.)
- Discussion at GridPP Technical Meeting on storage options, including xrootd-based sites (i.e. xrootd not DPM/dCache)
|
Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06
|
Tuesday 3rd November
- APEL delay (normal state) at Lancaster and Sheffield.
Tuesday 20th October
The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard - see talk.
Tuesday 22nd September
- Slight delay for Sheffield but overall okay - although there is a gap between today's date and the most recent update for all sites. Perhaps an APEL delay.
Monday 20th July
- Oxford is publishing 0 cores from CREAM today. Maybe they forgot to switch one off. Check here.
|
Documentation - KeyDocs
|
Friday 6th Nov, 2015
SteveJ: Advice to admins published about a common GSS error (globus_gsi_callback_module: Could not verify credential, etc.): https://www.gridpp.ac.uk/wiki/Security_system_errors_and_workarounds
Tuesday 20th October, 2015
Approved VOs document updated with temporary section for LZ
Tuesday 29th September
Steve J: Problems with the VOMS server at FNAL, voms.fnal.gov, have been detected. I will resolve them soon and may issue an update to Approved VOs, alerting sites via TB_SUPPORT should that occur. Approved VOs potentially affected are CDF, DZERO and LSST. Please do not act yet.
Tuesday 22nd September
- Steve J is going to undertake some GridPP/documentation usability testing.
Tuesday 18th August
- Lydia's document - Setup a system to do data archiving using FTS3
Tuesday 28th July
- Ewan: /cvmfs/gridpp-vo help... there's a lot of historical stuff on the GridPP wiki that makes it look a lot more complicated than it is now. We really should have a bit of a clear-out at some point.
Tuesday 23rd June
- Reminder that documents need reviewing!
General note
See the worst KeyDocs list for documents needing review now and the names of the responsible people.
|
Interoperation - EGI ops agendas
|
Tuesday 10th November
Meeting yesterday was cancelled in favour of OMB; next meeting scheduled for 14/12.
|
Monitoring - Links MyWLCG
|
Tuesday 16th June
- F Melaccio & D Crooks decided to add an FAQ section devoted to common monitoring issues to the monitoring page.
- Feedback welcome.
Tuesday 31st March
Monday 7th December
|
On-duty - Dashboard ROD rota
|
Monday 23rd November
- Still seeing the alarms for systems at QMUL and RHUL that are not in production.
- Closed some tickets for the rolling availabilities. Confused by the rolling availability plots. It seems the "rolling average" period was cut back to 20 days.
Tuesday 3rd November
- The long-standing UCL availability alarm went green on 29th October. We are not sure why!
- Quite a lot of activity on the dashboard this week, but only one or two new tickets.
- Tickets: Five for availability/reliability: Sussex, Sheffield, Liverpool, Lancaster and UCL. Two for GLUE2 validation: Liverpool and QMUL. One for the CEs at QMUL.
Tuesday 20th October
Lots of bits here and there, but no big pattern. Tickets about CE and storage problems are open at several sites. QMUL is notable as having gone on for a while, probably due to some configuration problem that has not yet been identified.
|
Rollout Status WLCG Baseline
|
Tuesday 15th September
Tuesday 12th May
- MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.
Tuesday 17th March
- Daniela has updated the [https://www.gridpp.ac.uk/wiki/Staged_rollout_emi3 EMI-3 testing table]. Please check it is correct for your site. We want a clear view of where we are contributing.
- There is a middleware readiness meeting this Wednesday. Would be good if a few site representatives joined.
- Machine job features solution testing. We fed back that we will only commence tests if more documentation is made available; this stalls the HTC solution until after CHEP. Is there interest in testing other batch systems? Raul mentioned SLURM; there are also SGE and Torque.
|
Security - Incident Procedure Policies Rota
|
Tuesday 24th November
- Call on NGIs to participate in "Security Threat Risk Assessment - with Cloud Focus" work.
- Check Pakiti for CVE-2015-7183 issues.
Tuesday 17th November
- Advisory-SVG-2015-CVE-2015-7183, issued 06/11/2015: a few UK sites show as unpatched in EGI monitoring. WNs, as tested by the monitoring, may be less vulnerable than the affected middleware services, but they can be taken as an indication of general site readiness, and sites are encouraged to check their status (a quick local check is sketched below).
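A quick local check sites could run alongside Pakiti: list the installed NSS-related packages so their versions can be compared against the fixed versions in the advisory. The package list here is an assumption; the SVG advisory is authoritative.

    # Print installed NSS-related package versions for comparison against
    # the fixed versions in the SVG advisory (which is authoritative; the
    # package list below is an assumption).
    import subprocess

    for pkg in ['nss', 'nss-util', 'nspr']:
        try:
            print(subprocess.check_output(['rpm', '-q', pkg]).decode().strip())
        except subprocess.CalledProcessError:
            print('%s: not installed' % pkg)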
Monday 9th November
- EGI SVG Advisory - 'Critical' risk. Remote arbitrary code execution vulnerabilities in the core crypto library used by Red Hat - Advisory-SVG-2015-CVE-2015-7183. All running resources based on Red Hat and its derivatives MUST be patched by 2015-11-13.
Tuesday 3rd November
- EGI SVG Advisory - Various Java CVEs with the maximum CVSS score.
Monday 26th October
The EGI security dashboard.
|
Services - PerfSonar dashboard | GridPP VOMS
|
- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)
Tuesday 6th October
Tuesday 14th July
- GridPP35 in September will have a partial focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.
|
Tickets
|
Monday 16th November 2015, 15.45 GMT
Only 17 Open UK tickets this week.
LSST FRIDAY 13th TEETHING TROUBLES
117585 (Liverpool)
117586 (Oxford)
Daniela has been testing out the LSST Dirac pilots - Ewan's fighting the good fight at Oxford getting his ARC to work, maybe Liverpool's problem has a similar root cause (we should be so lucky)?
TIER 1
116866 (12/10)
In a similar vein (and thanks again to Daniela for the effort with preparing our pilots) is this ticket about getting "other" pilots enabled at the Tier 1 - particularly with a view to these tests: http://www.gridpp.ac.uk/php/gridpp-dirac-sam.php?action=viewlcg Any news? On hold (6/11)
ECDF(-RDF)
I think this new ATLAS ticket:
117606 (15/11)
is a duplicate of this current ATLAS ticket:
117447 (8/11)
So I suspect it can be closed!
I see from 117642 that the other-VO Nagios was recently rebooted, which might explain the raft of failures I see; I'll check again before the meeting.
|
Tools - MyEGI Nagios
|
Tuesday 6 Oct 2015
Moved the Gridppnagios instance back to Oxford from Lancaster. It was something of a double whammy as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours but there was no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.
Tuesday 29 Sep 2015
Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st Oct and this may be extended depending on the situation.
The VO Nagios was also unavailable for two days, but we restarted it yesterday as it runs on a VM. VO Nagios uses the Oxford SE for its replication test, so it is failing those tests. I am looking to change to some other SE.
Tuesday 09 June 2015
- ARC CEs were failing the Nagios test because of the non-availability of the EGI repository, against which the Nagios test compares the CA version. It started on 5th June when one of the IP addresses behind the web server stopped responding. The problem went away in approximately 3 hours but returned on 6th June, and was finally fixed on 8th June. No reason was given in any of the tickets opened regarding this outage.
Tuesday 17th February
- Another period where the message brokers were temporarily unavailable was seen yesterday. Any news on the last follow-up?
Tuesday 27th January
- An unscheduled outage of the EGI message broker (GRNET) caused a short-lived disruption to GridPP site monitoring (jobs failed) last Thursday 22nd January. We suspect BDII caching meant there was no immediate failover from stomp://mq.afroditi.hellasgrid.gr:6163/ to stomp://mq.cro-ngi.hr:6163/ (a client-side failover sketch follows).
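For illustration, a sketch of client-side failover across the two brokers named above using the stomp.py library. Whether the GridPP monitoring consumers are actually built this way, and the topic name used, are assumptions; the point is simply not to depend on a single (possibly stale-cached) broker endpoint.

    # Sketch: client-side failover across the two EGI brokers named above,
    # using stomp.py. The destination topic is an example, not the real
    # EGI topic name.
    import stomp

    conn = stomp.Connection(host_and_ports=[
        ('mq.afroditi.hellasgrid.gr', 6163),
        ('mq.cro-ngi.hr', 6163),        # tried if the first is unreachable
    ])
    conn.start()
    conn.connect(wait=True)
    conn.subscribe(destination='/topic/grid.monitoring.results', id=1, ack='auto')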
|
VOs - GridPP VOMS VO IDs Approved VO table
|
Tuesday 19th May
- There is a current priority for enabling/supporting our joining communities.
Tuesday 5th May
- We have a number of VOs to be removed. Dedicated follow-up meeting proposed.
Tuesday 28th April
- For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503. Sites' vomses files need updating to match (see the sketch below).
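Port changes like this have to be reflected in each site's vomses configuration. An illustrative check (the file location is the conventional one; the check itself is just a sketch):

    # Illustrative check: flag vomses entries for the GridPP VOMS servers
    # that still carry the old SNOPLUS.SNOLAB.CA port (15003 -> 15503).
    import glob

    for path in glob.glob('/etc/vomses/*'):
        try:
            with open(path) as f:
                for n, line in enumerate(f, 1):
                    if ('voms02.gridpp.ac.uk' in line or
                            'voms03.gridpp.ac.uk' in line) and '"15003"' in line:
                        print('%s:%d still uses old port 15003' % (path, n))
        except IOError:
            pass  # skip unreadable entries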
Tuesday 31st March
- LIGO are in need of additional support for debugging some tests.
- LSST now enabled at 3 sites. No CVMFS of their own yet.
|
Site Updates
|
Tuesday 24th February
- Next review of status today.
Tuesday 27th January
- Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
- Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.
Tuesday 2nd December
- Multicore status. Queues available (63%)
- YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
- NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
- According to our table for cloud/VMs (26%)
- YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
- NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
- GridPP DIRAC jobs successful (58%)
- YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
- NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
- IPv6 status
- Allocation - 42%
- YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
- NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex (11)
- Dual stack nodes - 21%
- YES: Brunel; IC; QMUL; Oxford (4)
- NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex; RAL T1 (15)
Tuesday 21st October
- High loads seen in xroot at several sites: Liverpool and RAL T1... and also Bristol (see Luke's TB-S email of 16/10 for questions about changes to help).
Tuesday 9th September
- Intel announced the new generation of Xeon based on Haswell.
|