| WLCG Operations Coordination - Agendas Wiki Page
Tuesday 7th December
- There was a WLCG operations coordination meeting last Thursday. Minutes | Agenda.
- Please register for the WLCG workshop.
- News: Red Hat is now fixing the openldap crash issue affecting the Top BDII and ARC-CE. Stay tuned.
- T0: LSF 9 software deployed on all worker nodes
- T1: CC-IN2P3 & PIC: Globus host certificate validation changes
- T2: NTR
- ALICE: heavy ion run has been smooth from the grid perspective
- ATLAS: Tier-0 performance in terms of events/second reconstructed across the whole cluster is quite low (a few tens of Hz); huge I/O wait observed on the Wigner spinning-disk nodes. The full reprocessing campaign is planned to start on 14th December.
- CMS: CMS Tier-0 workflows are driving some CERN OpenStack hardware to its limits: GGUS:118056.
- LHCb: Significant MC generation incoming. MC simulation workflows have been executed successfully on commercial clouds, on both DBCE (up to 600 simultaneous jobs running) and Azure.
- glexec: NTR
- M/J features: Ongoing discussions clarifying key/value pairs. Next steps: review experience with implementations and installations, and update in view of the technical note discussions.
- HTTP TF: NTR
- Info sys future: The Future Use Cases Document is now ready in the WLCG Document Repository (PDF). Looking at an information system owned by WLCG (an interesting idea). Starting to prepare a roadmap to GLUE 2.0.
- IPv6: VOMS still does not work with IPv6. ARGUS not really tested but no problem expected as Java has good IPv6 support.
- MW readiness: Virtual meeting on 2nd December. Verification workflows in progress listed here.
- MC: NTR
- Network/transfer metrics: NTR
- RFC proxies: CMS have switched test pilot factories to RFC proxies.
- Squid monitoring/HTTP discovery: NTR
Monday 30th November
| Tier-1 - Status Page
Tuesday 8th December
A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting are here.
- We are investigating why LHCb batch jobs sometimes fail to write results back to Castor (and why they sometimes fail to write remotely as well). A recent change has improved, but not fixed, this problem.
- We are continuing with the detailed network changes needed to remove our old core switch from the network. This work should be completed this week. As part of this we have moved the link from our core network to the "UKLight" router (that provides our external data path) this morning. There is another change that affects Castor tomorrow morning and a 'warning' has been declared in the GOC DB.
- As reported last week, the tenders for the next round of Disk and CPU purchases are now out.
- We have seen some increased data transfer rates to/from the Tier1. We have a plan to increase the bandwidth on the bypass link (to Tier2s) from 10 to 20 Gbit/s.
- The new algorithm for the draining of worker nodes to make space for multi-core jobs has been rolled out to one batch of worker nodes. The new version allows "pre-emptable" jobs (ie. jobs that can be stopped at short notice) to run in the job slots until they are needed.
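The draining policy above can be sketched roughly as follows. This is a hypothetical illustration, not the actual batch-system code: the field names (`draining`, `preemptable`, `free_slots`) and the dict structure are invented for the example.

```python
def can_start(job, node):
    """Decide whether a job may start on a worker node.

    job: dict with 'cores' (slots needed) and 'preemptable' (bool).
    node: dict with 'draining' (bool) and 'free_slots' (int).
    All names are illustrative, not real scheduler fields.
    """
    if not node["draining"]:
        # Normal node: any job fits if enough slots are free.
        return job["cores"] <= node["free_slots"]
    # Draining node: only pre-emptable jobs (stoppable at short notice)
    # may backfill the slots until they are needed for multi-core work.
    return job["preemptable"] and job["cores"] <= node["free_slots"]
```

The point of the policy is that slots being freed up for a multi-core job are not left idle: short-notice-stoppable work keeps them busy in the meantime.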
| Storage & Data Management - Agendas/Minutes
Wednesday 02 Dec, not brought to you from the Brazilian Jungle.
- Implications of diskless T2s (aka T2C) - ongoing discussion - need results from tests soon...
- Several sites with old/decaying/decommissioned hardware
- Namespace dump testing updates
Wednesday 18 Nov
- New member from Edinburgh!
- Updates on PRODDISK cleanup at T2s, catalogue synchronisation (aka syncat), and what does "world readable" mean?
- Summary of EGI community forum - interoperation and standards for moving data
Wednesday 11 Nov
Wednesday 04 Nov
- Brian presented GridPP achievements at the E2E workshop
- Storage accounting with EGI
Wednesday 28 Oct
- Summary of "UK T0" workshop - GridPP well represented
- Sites should not upgrade to DPM 1.8.10 just yet
| Tier-2 Evolution - GridPP JIRA
Tuesday 8 Dec
- Liverpool has set up Vac with 126 VM slots. LHCb production MC and GridPP SAM tests running.
- GridPP DIRAC VMs now working with new dirac.gridpp.ac.uk service: needed an ad-hoc configuration value adding (/Resources/Computing/CEDefaults/VirtualOrganization=gridpp) for the pilots due to the multi-VO support (GRIDPP-9)
- For rollout, we would like to set up a JIRA component for each site (several exist already). Will be mailing sites to get the site contacts involved before we add each site.
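In DIRAC configuration-system syntax, the ad-hoc option added for the pilots (see the GRIDPP-9 note above) would look something like this. This is a sketch of the nested-section layout; only the option path and value come from the note above.

```
Resources
{
  Computing
  {
    CEDefaults
    {
      VirtualOrganization = gridpp
    }
  }
}
```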
Tuesday 1 Dec
- Vac (0.20pre) now provides contextualization, metadata (EC2/OpenStack), and Machine/Job Features via HTTP rather than via ISO image and NFS (GRIDPP-27). This should allow VMs expecting to be run by OpenStack to work on Vac without modification.
- CernVM image signature checking, and APEL Sync record generation (GRIDPP-10) are now also in Vac 0.20pre.
Tuesday 24 Nov
- Cloud Init demonstrated with modified version of (old) GridPP DIRAC VMs
- Cloud Init support in Vac 0.20pre (GRIDPP-27)
- Progress on VMs for new GridPP DIRAC service: multi-VO config currently preventing matching. (GRIDPP-9)
Tuesday 17 Nov
- New depo.gridpp.ac.uk service for uploading files via HTTPS
- ATLAS VMs now upload log files to depo.gridpp.ac.uk for debugging (GRIDPP-24)
Monday 9 Nov
- Vac 0.19.0 released last week: new VacQuery protocol
- Prototyping Vcycle(/Vac) GLUE2 publishing: talk at WLCG InfoSys Evolution TF on Thursday
- Please now use https://repo.gridpp.ac.uk/vacproject/ URLs for user_data and cernvm3.iso files (in preparation for the move to the new www.gridpp.ac.uk website).
- See Laurence's GDB talk "Helix Nebula Price Enquiry Results" for more use of Vcycle
Tuesday 3 Nov
- LHCb prototype of GOCDB pointers to resource BDII done
- T2C tests at Oxford ongoing
| Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06
Tuesday 24th November
- Slight delay for Sheffield.
Tuesday 3rd November
- APEL delay (normal state) for Lancaster and Sheffield.
Tuesday 20th October
The WLCG MB decided to create a Benchmarking Task Force led by Helge Meinhard; see talk.
| Documentation - KeyDocs
Tuesday 1st December
- The sixt and hone VOs have been removed from the GridPP list.
Tuesday 24th November
Steve J: problems with the VOMS server at FNAL, voms.fnal.gov, resolved. The Approved VOs document has been updated with the newest records for the affected VOs: CDF, DZERO, LSST. Also note changes to the CA_DN for PLANCK and CDF. https://www.gridpp.ac.uk/wiki/GridPP_approved_VOs
Friday 6th Nov, 2015
SteveJ: Advice to admins published about a common GSS error, "globus_gsi_callback_module: Could not verify".
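This GSS error is raised when Globus cannot validate a certificate against the CA files in the local CA path (typically /etc/grid-security/certificates). A hypothetical way to reproduce the underlying check locally with OpenSSL, using a throwaway self-signed certificate and a scratch CA path (all paths here are illustrative):

```shell
# Make a throwaway self-signed certificate (stands in for a host cert).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
  -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null

# Build a scratch CA path: entries must be named <subject-hash>.0.
mkdir -p /tmp/capath
hash=$(openssl x509 -noout -hash -in /tmp/cert.pem)
cp /tmp/cert.pem /tmp/capath/${hash}.0

# The same style of chain verification Globus performs; prints "... OK"
# here because the cert's own issuer is present in the CA path.
openssl verify -CApath /tmp/capath /tmp/cert.pem
```

If the issuing CA (or its hash link) is missing from the CA path, this check fails, which is the usual cause of the "Could not verify" message.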
Tuesday 20th October, 2015
Approved VOs document updated with temporary section for LZ
Tuesday 29th September
Steve J: problems with the VOMS server at FNAL, voms.fnal.gov, have been detected; I will resolve them soon and may issue an update to Approved VOs, alerting sites via TB_SUPPORT should that occur. The Approved VOs potentially affected are CDF, DZERO and LSST. Please do not act yet.
See the worst KeyDocs list for documents needing review now and the names of the responsible people.
| Interoperation - EGI ops agendas
Tuesday 10th November
Meeting yesterday was cancelled in favour of OMB; next meeting scheduled for 14/12.
| On-duty - Dashboard ROD rota
Monday 30th November
- A quiet week, mostly just closing alarms for not-in-production services. There are no open ROD tickets.
Monday 23rd November
- Still seeing the alarms for systems at QMUL and RHUL that are not in production.
- Closed some tickets for the rolling availabilities. Confused by the rolling availability plots. It seems the "rolling average" period was cut back to 20 days.
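For what "rolling availability over a 20-day window" means in practice, a minimal sketch (this is the generic trailing-window mean, not the dashboard's actual code):

```python
def rolling_availability(daily, window=20):
    """Mean of the trailing `window` daily availability fractions.

    daily: list of per-day availability values in [0.0, 1.0],
    oldest first. If fewer than `window` days exist, use them all.
    Illustrative only -- not the ROD dashboard implementation.
    """
    window = min(window, len(daily))
    return sum(daily[-window:]) / window
```

Cutting the window from a longer period back to 20 days makes the plotted value react faster to recent outages, which would explain the changed plots.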
Tuesday 3rd November
- The long-standing UCL availability alarm went green on 29th October. We are not sure why!
- Quite a lot of activity on the dashboard this week, but only one or two new tickets.
- Tickets: Five for availability/reliability: Sussex, Sheffield, Liverpool, Lancaster and UCL. Two for GLUE2 validation: Liverpool and QMUL. One for the CEs at QMUL.
Tuesday 20th October
Lots of bits here and there, but no big pattern. Tickets about CE and storage problems are open at several sites. QMUL is notable as going on for a while, probably with some kind of configuration problem they haven't yet identified.
| Services - PerfSonar dashboard | GridPP VOMS
- This includes notification of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update.)
Tuesday 6th October
Tuesday 14th July
- GridPP35 in September will have a part focus on networking and IPv6. This will include a review of where sites are with their deployment. Please try to firm up dates for your IPv6 availability between now and September. Please update the GridPP IPv6 status table.
Monday 7th December 2015, 12.30 GMT
Apologies from Matt - we're picking ourselves up from the weekend's events up here in Lancaster.
31 open UK tickets, 13 of which are ATLAS consistency-checking tickets.
Link to all the GGUS tickets.
Of general interest is this ATLAS ticket: 118130, where ATLAS have noticed large numbers of transfer and staging errors for the cloud.
Full review next week!
| Tools - MyEGI Nagios
Monday 30th November
- The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.
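As a rough illustration of the usual EGI-style definitions (the ARGO document referenced above is the authoritative source; this is only a sketch of the commonly quoted formulas):

```python
def availability(up, unknown, total):
    """Fraction of *known* time the service was up:
    up / (total - unknown). Returns None if nothing is known."""
    known = total - unknown
    return up / known if known else None

def reliability(up, unknown, scheduled_down, total):
    """Like availability, but scheduled downtime is also excluded
    from the denominator, so planned maintenance is not penalised."""
    known = total - unknown - scheduled_down
    return up / known if known else None
```

So a service that was up 9 of 12 hours, with 2 hours of unknown monitoring state and 1 hour of scheduled downtime, scores 90% availability but 100% reliability.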
Tuesday 6 Oct 2015
Moved the Gridppnagios instance back to Oxford from Lancaster. It was something of a double whammy, as both sites went down together. Fortunately the Oxford site was partially working, so we managed to start SAM Nagios at Oxford. SAM tests were unavailable for a few hours, but with no effect on EGI availability/reliability. Sites can have a look at https://mon.egi.eu/myegi/ss/ for A/R status.
Tuesday 29 Sep 2015
Following an air-conditioning problem in the machine room at the Oxford Tier-2 site on 26 September, gridppnagios(OX) was shut down and gridppnagios(Lancs) became the active instance. The Oxford site is in downtime until 1st October, and this may be extended depending on the situation.
VO-Nagios was also unavailable for two days, but we started it yesterday as it is running on a VM. VO-Nagios uses the Oxford SE for its replication test, so it is failing those tests. I am looking to change to another SE.
| VOs - GridPP VOMS VO IDs Approved VO table
Tuesday 19th May
- There is a current priority for enabling/supporting our joining communities.
Tuesday 5th May
- We have a number of VOs to be removed. Dedicated follow-up meeting proposed.
Tuesday 28th April
- For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
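For reference, a client-side vomses entry carrying the updated port has this general form: alias, host, port, server certificate DN, VO name. The DN below is a placeholder, not the server's real DN:

```
"snoplus.snolab.ca" "voms02.gridpp.ac.uk" "15503" "/DC=example/CN=voms02.gridpp.ac.uk" "snoplus.snolab.ca"
```

Clients (and any LSC/vomses files distributed to sites) need the new port before proxy requests to these servers will succeed.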
Tuesday 31st March
- LIGO are in need of additional support for debugging some tests.
- LSST now enabled at 3 sites. No CVMFS of their own yet.
| Site Updates
Tuesday 24th February
- Next review of status today.
Tuesday 27th January
- Squids not in GOCDB for: UCL; ECDF; Birmingham; Durham; RHUL; IC; Sussex; Lancaster
- Squids in GOCDB for: EFDA-JET; Manchester; Liverpool; Cambridge; Sheffield; Bristol; Brunel; QMUL; T1; Oxford; Glasgow; RALPPD.
Tuesday 2nd December
- Multicore status. Queues available (63%)
- YES: RAL T1; Brunel; Imperial; QMUL; Lancaster; Liverpool; Manchester; Glasgow; Cambridge; Oxford; RALPP; Sussex (12)
- NO: RHUL (testing); UCL; Sheffield (testing); Durham; ECDF (testing); Birmingham; Bristol (7)
- According to our table for cloud/VMs (26%)
- YES: RAL T1; Brunel; Imperial; Manchester; Oxford (5)
- NO: QMUL; RHUL; UCL; Lancaster; Liverpool; Sheffield; Durham; ECDF; Glasgow; Birmingham; Bristol; Cambridge; RALPP; Sussex (14)
- GridPP DIRAC jobs successful (58%)
- YES: Bristol; Glasgow; Lancaster; Liverpool; Manchester; Oxford; Sheffield; Brunel; IC; QMUL; RHUL (11)
- NO: Cambridge; Durham; RALPP; RAL T1 (4) + ECDF; Sussex; UCL; Birmingham (4)
- IPv6 status
- Allocation - 42%
- YES: RAL T1; Brunel; IC; QMUL; Manchester; Sheffield; Cambridge; Oxford (8)
- NO: RHUL; UCL; Lancaster; Liverpool; Durham; ECDF; Glasgow; Birmingham; Bristol; RALPP; Sussex
- Dual stack nodes - 21%
- YES: Brunel; IC; QMUL; Oxford (4)
- NO: RHUL; UCL; Lancaster; Glasgow; Liverpool; Manchester; Sheffield; Durham; ECDF; Birmingham; Bristol; Cambridge; RALPP; Sussex; RAL T1 (15)
Tuesday 21st October
- High loads seen in xroot at several sites: Liverpool and RAL T1... and also Bristol (see Luke's TB-S email of 16/10 for questions about changes to help).
Tuesday 9th September
- Intel announced the new generation of Xeon based on Haswell.