<!-- ******************Edit start********************* ----->
'''Tuesday 20th December 2016, 10.00 GMT'''<br />

22 Open UK Tickets this week.

'''The Tickets of Christmas Prodding.'''<br />
These tickets could do with an update before we head off for our tofurkey dinners:

[https://ggus.eu/?mode=ticket_info&ticket_id=125169 125169] (24/11)<br />
BIRMINGHAM ticket, regarding small VOs and their batch system. Daniela had to reopen the ticket, which I think meant it snuck past Mark. Reopened (14/12)

[https://ggus.eu/?mode=ticket_info&ticket_id=122771 122771] (11/7)<br />
BIRMINGHAM xrootd/http endpoint ticket. Atlas have tested xrdcp, which failed, and have asked whether the full path is exported. It could do with a reply before Christmas. In progress (15/12)
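For reference, the kind of check Atlas ran can be reproduced by hand against both doors. This is only a sketch - the hostname and path below are placeholders, not the real Birmingham endpoint:

```shell
# Hypothetical reproduction of the failing transfer (endpoint/path are placeholders).
# Copy a test file via the xrootd door:
xrdcp root://se.example.ac.uk//dpm/example.ac.uk/home/atlas/testfile /tmp/testfile
# Fetch the same file over the http door, to confirm the full path is exported:
davix-get https://se.example.ac.uk/dpm/example.ac.uk/home/atlas/testfile /tmp/testfile-http
```

If the xrootd copy works but the http one returns a 404, that would point at the path-export question rather than the transfer itself.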
[https://ggus.eu/?mode=ticket_info&ticket_id=125503 125503] (9/12)<br />
SUSSEX ticket, from Sno+ regarding file access. I think the most obvious input from us would be to advise Sno+ to stop using lcg-cp! In progress (15/12)
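If it helps the Sussex thread, the lcg-utils to gfal2-utils translation is roughly one-to-one. The SURL below is a placeholder for illustration, not a real SNO+ path:

```shell
# Deprecated lcg-utils style (roughly what Sno+ are presumably still running):
#   lcg-cp -b -D srmv2 srm://se.example.ac.uk/dpm/example.ac.uk/home/snoplus/data.root file:///tmp/data.root
# gfal2-utils replacement, same source/destination semantics:
gfal-copy srm://se.example.ac.uk/dpm/example.ac.uk/home/snoplus/data.root file:///tmp/data.root
```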
[https://ggus.eu/?mode=ticket_info&ticket_id=125320 125320] (2/12)<br />
RALPP nagios ticket. Chris was going to install the machine offending his dCache last week - any joy? I'm being finicky here, as this ticket is already On Hold (13/12)

[https://ggus.eu/?mode=ticket_info&ticket_id=124848 124848] (4/11)<br />
RHUL CMS ticket - I'm just checking that all is chugging along on this front; there's a lot of CMS chatter on it. In progress (19/12)

[https://ggus.eu/?mode=ticket_info&ticket_id=125613 125613] (100IT)<br />
[https://ggus.eu/?mode=ticket_info&ticket_id=125492 125492] (QMUL)<br />
Finally, two ROD tickets waiting for input - but I'm thinking that answering the queries might require extra input from outside.

'''The Tickets of Christmas Solved.'''<br />
These look to me like they can be wrapped up for the holiday season.

[https://ggus.eu/?mode=ticket_info&ticket_id=125157 125157] (24/11)<br />
TIER 1 ticket, extras-fp7.eu cvmfs setup. It seems to be going well; at last check we were just waiting on the stratum 1s to be replicated. I might be jumping the gun a bit in calling this ticket solved, but it's almost there. In Progress (7/12)

[https://ggus.eu/?mode=ticket_info&ticket_id=124606 124606] (24/10)<br />
TIER 1 CMS consistency checking ticket. Andrew L has done all that was asked of him, and I wouldn't be surprised if the CMS side has disappeared for the holidays, so I would say you could close this ticket. Waiting for reply (9/12)

[https://ggus.eu/?mode=ticket_info&ticket_id=124758 124758] (1/11)<br />
EDINBURGH nagios availability ticket; the numbers are all green, so it can be closed. Waiting for reply(?) (15/12)

'''The Tickets of Christmas On Hold.'''<br />
These tickets could do with being On Held, or at least have a note on them saying "Will get to this next year".

[https://ggus.eu/?mode=ticket_info&ticket_id=125480 125480] (9/12)<br />
TIER 1 ticket regarding CPU publishing - I doubt even Andrew L is hardcore enough to roll out something new this side of Yuletide, so this could do with being On Held until next year. In progress (12/12)

(Thanks to the sites who On Held those bothersome AFS tickets.)
<!-- ******************Edit stop********************* ----->
General updates
|
Monday 9th January
Tuesday 20th December
Monday 12th December
- January pre-GDB on Networking (Sites and Operations session input requested)
- We have received notification of the November T2 R/A figures.
- ALICE. All okay
- ATLAS
- RHUL 89%:91%
- Glasgow 85%:85%
- CMS. All okay
- LHCb
- Liverpool 33%:33%
- Glasgow 28%:28%
- Explanations:
- RHUL: There was a DPM database move scheduled during the month.
- LHCb results were poor (for Liverpool and Glasgow) due to SRM tests which were false positives. This has been corrected and re-computations are being requested.
- Glasgow (ATLAS): 1-2 November - Downtime due to Power failure. 7 November - DPM pool node disk042 caused issues with DPM headnode. 17-18 November - DPM pool nodes disk042/disk070 caused issues with DPM headnode freezing SRM.
- The EGI Security Policy Group has produced a revised draft version of the top-level Security Policy bringing the document up to date in terms of terminology and with the current set of security policy documents.
- GridPP DIRAC job limits and other VO usage of resources.
- Resource utilisation - can we distill the main drivers?
|
WLCG Operations Coordination - AgendasWiki Page
|
Monday 5th December
- There was a WLCG ops coordination meeting on 1st: Agenda | Minutes
Monday 7th November
Tuesday 25th October
Monday 3rd October
- There was a WLCG ops coordination meeting last week: Agenda. (Good to review in the ops meeting).
Monday 26th September
Monday 19th September
|
Tier-1 - Status Page
|
Tuesday 10th January
A reminder that there is a weekly Tier-1 experiment liaison meeting. Notes from the last meeting here
- Generally steady operations over the holiday break.
- The upgrade of Castor to version 2.1.15 will cause a series of outages as announced via the GOC DB. The first one is today (10th Jan).
- CMS reported poor job efficiencies across their Tier1 sites. We have been seeing a steady rate of SAM test failures for CMS.
|
Storage & Data Management - Agendas/Minutes
|
Wednesday 21 Dec
- Report from Computing Insights UK (nee MEW)
- Highlights and lowlights of the year (storage and data management only - which seems to have fared better than the world in general)
Wednesday 07 Dec
- Good GridPP presence at the CLOUD-SIG workshop at Crick last week. Real Work™ included GridFTP-endpoint-in-the-cloud and Jupyter notebooks. How do you get data into a Jupyter notebook (from the grid)?
- Opportunity to do more with data transfer between infrastructures.
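On the "data into a Jupyter notebook" question, one minimal sketch, assuming gfal2-utils and a valid grid proxy are available on the notebook host (the endpoint and paths are placeholders):

```shell
# Inside a Jupyter cell these lines would be prefixed with "!"; as plain shell:
# Check there is at least an hour left on the VOMS proxy before transferring:
voms-proxy-info --exists --valid 1:00
# Stage the grid file to local disk (placeholder SURL and destination):
gfal-copy srm://se.example.ac.uk/dpm/example.ac.uk/home/atlas/user/file.h5 file:///tmp/file.h5
# The local copy can then be opened with the usual Python tools in the notebook.
```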
Wednesday 23 Nov
- A couple of operational issues; ATLAS data mover triggering lack of checksum support in xroot in Edinburgh DPM
Wednesday 16 Nov
- Some curious operational issues with failed xroot transfers - need more testing
- Duncan provided feedback from the JISC data transfer/campus networking workshop
|
Tier-2 Evolution - GridPP JIRA
|
Thursday 5 January 2017
- Vac 02.00 released, with SuperSlots mechanism for handling a mixture of different VM sizes on the same hypervisors.
Monday 2 January 2017
Wednesday 14 December
- LHCb has added Vac resources to the LHCb_CRITICAL profile used in the SAM3 dashboard and WLCG site A&R reports. (GRIDPP-5)
Monday 12 December
- ALICE VMs in production at Manchester. Other Vac sites invited to install it. (No VOBOX required...)
|
Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06
|
Monday 14th November
- Alessandra has written an FAQ to extract numbers from ATLAS and APEL avoiding the SSB.
Monday 26th September
- A problem with the APEL Pub and Sync tests developed last Tuesday and was resolved on Wednesday. This had a temporary impact on the accounting portal.
|
Interoperation - EGI ops agendas
|
Monday 7th November
- There was an EGI Ops meeting today: agenda
- UMD 3.14.5 released today
- VOMS 3.5.0, which makes RFC proxies the default for voms-proxy-init
- UMD 4.3.0 'October' release, release candidate ready, to be released by end of this week, including:
- ARC, GFAL2, XROOT, Davix, dCache, ARGUS, Gridsite, edg-mkgrid, umd-release for CentOS7
- please start using UMD4/SL6 or UMD4/CentOS7 instead of UMD3/SL6, and please don't use EMI3 any more
- (think there may be a campaign around this soon)
- Downtimes due to the vulnerability CVE-2016-5195: request an A/R recomputation
- All the resource centres that were affected by the vulnerability CVE-2016-5195 and that declared a downtime between 2016-10-20 16:00 UTC and 2016-10-31 18:00 UTC are invited to request a recomputation of A/R figures for the days in which the downtime was ongoing.
- ARGO proposal to use GOCDB as the only source of topology information
- VAPOR 2.1 released in September, it replaces GSTAT
|
Monitoring - Links MyWLCG
|
Tuesday 1st December
Tuesday 16th June
- F Melaccio & D Crooks decided to add a FAQs section devoted to common monitoring issues under the monitoring page.
- Feedback welcome.
Tuesday 31st March
Monday 7th December
|
On-duty - Dashboard ROD rota
|
Monday 9th January
- Daniela posted on 25th Dec to say: "there's quite a lot of tickets - I tried to extend them all into January.
Otherwise apparently the grid is still standing."
Monday 5th December
- Another fairly quiet week. There were a few new tickets, which are mostly now resolved. There was an intermittent SRM alarm at RALPP which sporadically appeared then disappeared; it finally hung around long enough today to raise a ticket.
Monday 21st November
- EFDA low-availability (egi.eu.lowAvailability-/EFDA-JET@EFDA-JET_Availability) ticket finally removed!
|
Rollout Status WLCG Baseline
|
Tuesday 7th December
- Raul reports: validation of site BDII on Centos 7 done.
Tuesday 15th September
Tuesday 12th May
- MW Readiness WG meeting Wed May 6th at 4pm. Attended by Raul, Matt, Sam and Jeremy.
References
|
Security - Incident Procedure Policies Rota
|
Tuesday 20th December
- CVE-2016-7117 downgraded from CRITICAL to HIGH by EGI SVG [1]
- One non-UK incident advisory from EGI CSIRT, affecting one site. Not known to have affected other sites, but more details are expected after further analysis.
Tuesday 13th December
- CVE-2016-7117 still waiting for distro. kernel updates
- EGI Dashboard hint: to display a recent history of state records -
- Starting from the "Issues" page for a site
- Select right-hand-upper drop-down “Issues” and select “History”
- Click the required test in the table shown e.g. ARC-Pakiti-Check
- Select "Records" from the upper-menu
Tuesday 6th December
- One user suspension (and subsequent release) in Argus due to an EGI FedCloud incident. Not known to affect the UK.
- CVE-2016-7117 still waiting for distro. kernel updates
- Next Security Team meeting on the 13th Dec.
- The certificate on pakiti.egi.eu expired on 5/12/2016 [2]
Monday 21st November
- The IGTF will release a regular update to the trust anchor repository (1.79) on Monday, Nov 28th 2016.
- We are still seeing sites pop-up in the dashboard in relation to EGI-SVG-2016-11476.
- CVE-2016-7117 still waiting for distro. kernel updates
The EGI security dashboard.
|
|
Services - PerfSonar production dashboard |PerfSonar development dashboard | GridPP VOMS
|
- This includes notifying of (inter)national services that will have an outage in the coming weeks or will be impacted by work elsewhere. (Cross-check the Tier-1 update).
Tuesday 25th October
- Duncan has recreated the UK perfSONAR mesh. Link here!
Monday 19th September
- UK eScience CA - certificate issuance problems. Jens reported that on 15th a partial but significant database corruption occurred on the signing system for the CA. Data was restored from (offline) backups but the rebuild was not correctly configured.
- A large number of site admins and other GridPP supporters appeared to be suspended from the dteam VO last week. “During a planned upgrade operation of VOMS service, a system malfunction occurred. As a result, some users received false notification about membership expiration. We are in contact with the software development team in order to identify the cause.”
|
Tools - MyEGI Nagios
|
29th November 2016
LSST monitoring is available through vo-nagios
https://vo-nagios.physics.ox.ac.uk/nagios/cgi-bin/status.cgi?servicegroup=lsst&style=detail
13th September 2016
19th July
Both instances of gridppnagios, at Oxford and Lancaster, have been decommissioned.
12th July 2016
The central ARGO monitoring service started on 1st July. All grid resources are monitored through two Nagios instances:
https://argo-mon.egi.eu/nagios/
https://argo-mon2.egi.eu/nagios/
These have the same interface as gridppnagios. Alarms from these instances go to the Operational Dashboard.
http://argo.egi.eu/ is a web interface which provides availability/reliability figures and site status. It is the equivalent of the old MyEGI interface, with some additional services.
I am planning to decommission both instances of gridppnagios in the coming weeks. I have stopped nagios and httpd on both instances, so they will not send tests to grid resources in the UK. I will also decommission storage-monit.physics.ox.ac.uk, which was only used for the storage replication test.
We will keep vo-nagios.physics.ox.ac.uk running until we get a replacement for vo-monitoring.
Monday 13th June
- Active Nagios instance moved to Lancaster
Tuesday 5th April 2016
Oxford had a scheduled network warning so active nagios instance was moved from Oxford to Lancaster. I am not planning to move it back to Oxford for the time being.
Tuesday 26th Jan 2016
One of the message brokers was in downtime for almost three days. The Nagios probes pick a random message broker and failover is not working, so a lot of ops jobs hung for a long time. It's a known issue and unlikely to be fixed, as SAM Nagios is on its last legs. Monitoring is moving to ARGO and many things are not clear at the moment.
Monday 30th November
- The SAM/ARGO team has created a document describing the availability/reliability calculation in the ARGO tool.
|
VOs - GridPP VOMS VO IDs Approved VO table
|
Tuesday 19th May
- There is a current priority for enabling/supporting our joining communities.
Tuesday 5th May
- We have a number of VOs to be removed. Dedicated follow-up meeting proposed.
Tuesday 28th April
- For SNOPLUS.SNOLAB.CA, the port numbers for voms02.gridpp.ac.uk and voms03.gridpp.ac.uk have both been updated from 15003 to 15503.
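A quick way for sites to confirm the port change has landed is to look at the relevant vomses entries. The file path and DN below are placeholders for illustration:

```shell
# Hypothetical check of the updated vomses entries (paths/DN are placeholders).
# A vomses line has the form:
#   "SNOPLUS.SNOLAB.CA" "voms02.gridpp.ac.uk" "15503" "<server DN>" "SNOPLUS.SNOLAB.CA"
grep '"15503"' /etc/vomses/* && echo "updated port present"
# Any remaining references to the old port would show up with:
grep '"15003"' /etc/vomses/*
```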
Tuesday 31st March
- LIGO are in need of additional support for debugging some tests.
- LSST now enabled on 3 sites. No 'own' CVMFS yet.
|
Site Updates
|
Tuesday 23rd February
ALICE:
All okay.
RHUL 89%:89%
Lancaster 0%:0%
RALPP: 80%:80%
RALPP: 77%:77%
- RHUL: The largest problem was related to the SRM. The DPM version was upgraded and it took several weeks to get it working again (13 Jan onwards). Several short-lived occurrences of running out of space on the SRM for non-ATLAS VOs. For around 3 days (15-17 Jan) the site suffered from a DNS configuration error by their site network manager which removed their SRM from the DNS, causing external connections such as tests and transfers to fail. For one day (25 Jan) the site network was down for upgrade to the 10Gb link to JANET. Some unexpected problems occurred extending the interruption from an hour to a day. The link has been successfully commissioned.
- Lancaster: The ASAP metric for Lancaster for January is 97.5 %. There is a particular problem with ATLAS SAM tests which doesn’t affect the site activity in production and analysis and this relates to the path name being too long. A re-calculation has been performed.
- RALPP: Both CMS and LHCb low figures are due to specific CMS jobs overloading the site SRM head node. The jobs should have stopped now.
|
|