Operations Bulletin 190813

From GridPP Wiki
Revision as of 08:54, 19 August 2013 by Jeremy Coles

Bulletin archive


Week commencing 12th August 2013
Task Areas
General updates

  • An EMI-2 SL6 tarball WN has been put into CVMFS thanks to Matt and Jakob (see GGUS:96030 for more information).

Monday 12th August

  • Redirector problems affecting many UK ATLAS sites.
  • EGI reliability/availability of UK core services for July 2013 (i.e. top BDII) is recorded as good (almost 100%).
  • EGI also monitors site availability/reliability (results akin to the WLCG numbers) and the figures are available via these tables. For July use this link.
  • WLCG operations generally quiet (Monday's summary). The main UK issue is CMS Caltech-RAL transfers - under investigation.
  • A reminder that registration for GridPP31 is now open. The meeting will be at Imperial on 24th and 25th September.


Tuesday 30th July

  • GridPP news: Tomorrow is the last day that Stuart will be with GridPP ... thanks Stuart for your contributions! Coincidentally, Neasan will also be moving on after tomorrow so again thanks are due. Good luck to both!
  • GLUE2 BDII output at Liverpool - restart fixed the problem. Midmon was reporting CRITICAL - errors 1068, warnings 1071, info 1299.
  • All UK ATLAS sites now managed by RAL FTS3 though not all site transfers use it at the moment. QMUL has an issue due to Storm.
  • A reminder that (EGI/NGI) operations procedures for certain key tasks can be found linked from here in the EGI wiki. In particular PROC13 lists the steps we are expected to follow when decommissioning a VO (something we will be doing shortly).
  • EGI is setting up a task force to explore CVMFS as a service for all EGI VOs. This follows Ian's talk at the Manchester Community Forum in April. Catalin will be leading the task force.
  • Early bird registration for the EGI Technical Forum will now end at midnight on Sunday 4th August 2013.
  • The Tier-1 will cease operation of the RAL AFS cell on the 31st October this year.
WLCG Operations Coordination - Agendas

Monday 12th August

  • There have been no recent meetings. The next is on 29th August.

Tuesday 16th July

  • SL6
    • EMI-3 voms-proxy-info: a third problem, with Java steadily eating away memory. You can follow the story in tickets GGUS:94878 and GGUS:95574.
      • A fix is in the testing repositories and has been tested at Liverpool and Oxford.
    • UK status: 4 sites online, 3 testing, 7 with a plan, 3 without a plan (UCL, Durham, RALPP).
    • Presentation today at Atlas ADC weekly
    • Now checking with sites on how LHCb is doing; it seems LHCb jobs are not running everywhere.
  • Monitoring
  • Next Coord meeting Thursday 18/7/2013

Tuesday 9th July

  • SL6
    • Atlas new sw validation system scalability problem has been solved.
    • The VOMS packages are now in the EMI-3 repository. No testing or production PT repositories are necessary.
    • UK status: 3&1/2 sites online, 3 testing, 7 with a plan, 4 without a plan (UCL, Durham, RALPP, SUSX).
    • HS06: T0 tests on the compilers didn't give significant differences. Hepix has started an SL6 HS06 page where sites are welcome to post their results SL6 HS06 benchmark results
  • Monitoring
    • A WLCG monitoring consolidation group has been set up to consolidate the WLCG monitoring. It does not cover everything (a portion developed by the experiments is not included), but it does cover the well-known dashboards.
      • WLCG monitoring Initial status.
      • First meeting last week. The experiments have already given a first evaluation; sites will be represented via WLCG Ops Coordination. To get feedback from sites, a group has been set up to collect site opinions (see Maria's slide). Anyone interested should contact Pepe Flix (jflix@NOSPAMpic.es). David Crooks and Kashif might want to be part of it as this touches on the GridPP core tasks.
    • Among things interesting to discuss
      • myWLCG vs SUM tests: both get their information from the same source, i.e. Nagios.
      • Personalised dashboard looks interesting but was never publicized much.
      • Sites monitoring requirements: SUM tests not representing the real experiment status for example.
Tier-1 - Status Page

Tuesday 13th August

  • Investigating problems (timeouts) on cms_tape.
  • There may be an At Risk this week when an intervention (fan replacement) takes place on the UPS. (Details to be confirmed).
  • Testing of alternative batch system (Condor/ARC CEs/Sl6) proceeding.
Storage & Data Management - Agendas/Minutes

Tuesday 15th July

  • Three sites now run all their UK FTS traffic via the FTS3 service as a test. Mostly successful; a small issue with a few US sites is to be resolved before taking the tests further.

Tuesday 28th May

  • The 'Big Data' agenda is being compiled here. There is also now a suggestion for a cross disciplinary clouds and virtualisation workshop in July - the idea is 'in progress' but no more detail is yet available.


Accounting - UK Grid Metrics HEPSPEC06 Atlas Dashboard HS06

Tuesday 13th August

Tuesday 23rd July

  • Sites moving to SL6 are reminded of the need to re-benchmark their WNs. Some sites have updated the wiki already and provide an idea of the performance change.
  • There is an ongoing PMB discussion about the timeline for the next Tier-2 hardware tranche. Please let Pete or Jeremy know if your site will benefit from a spend this financial year.

Tuesday 30th April

  • A discussion is starting about how to account/reward disk that is reallocated to LHCb. By way of background, LHCb is changing its computing model to use more of Tier-2 sites. They plan to start with a small number of big/good T2 sites in the first instance, and commission them as T2-Ds with disk. Ideally such sites will provide >300TB but for now may allocate 100TB and build it up over time. Andrew McNab is coordinating the activity for LHCb. (Note the PMB is already aware that funding was not previously allocated for LHCb disk at T2s).

Tuesday 12th March

  • APEL publishing stopped for Lancaster, QMUL and ECDF

Tuesday 12th February

  • SL HS06 page shows some odd ratios. Steve says he now takes "HS06 cpu numbers direct from ATLAS" and his page does get stuck every now and then.
  • An update of the metrics page has been requested.
Documentation - KeyDocs

See the worst KeyDocs list for documents needing review now and the names of the responsible people.

Tuesday 13th August

  • Due to various staff changes/moves there are now a non-negligible number of documents that need new 'owners'. We will reallocate at the next core tasks meeting.

Tuesday 23rd July

  • Only minor updates to the keydocs mentioned last week as in need of attention/review. Please could everyone review the documents for which they are responsible.

Tuesday 16th July

  • Many key docs have reached their validity limit and need reviewing.


Tuesday 30th April

Tuesday 9th April

  • Please could those responsible for key documents start addressing the completeness of documents for which they are responsible? Thank you.

Tuesday 26th February

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Monitoring(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Cluster Management(1/0) (brackets show total/missing)

Interoperation - EGI ops agendas

Monday 22nd July

  • Problems with recent versions of VOMS, WMS, UI and Storm. New release of dCache that supports SHA-2 proxies.
  • S/w releases: dCache 2.6.5, which has support for SHA-2 certs. A backport of SHA-2 support to the 2.2.* series is expected at the end of the month.
  • S/w issues: The VOMS server doesn't (always) start the resource BDII automatically. This makes the SHA-2 probe fail, because in many cases there's no entry in the information system for the VOMS server, giving a false positive. After some discussion on how to handle this, the probes will be removed for the moment, with the expectation of a VOMS update to fix the BDII problem in short order. Once that's done, the alarms will be re-enabled. If no quick fix is forthcoming, this will be looked at again (and put in abeyance). Therefore: RoD may close the SHA-2 probe alarms on VOMS servers if they think that's the right way to handle it; or, if a ticket is already open, it can be left open until resolved.
  • Gridsite problem: This affects the UI, WMS and LB. The latest version of Gridsite breaks on proxies with '-' in them, which is seen as intermittent failures when attempting to delegate proxies. A workaround is to yum downgrade gridsite gridsite-libs on the WMS, or yum downgrade gridsite-commands gridsite-libs on the UI.
  • Storm: Performance problems with current version of Storm. This means that the current version isn't certified - hence the SHA-2 tests for Storm are off, because there is not a certified version of Storm that supports SHA-2.
gLite support calendar.


Monitoring - Links MyWLCG

Tuesday 23rd July

Tuesday 18th June

  • David C is taking feedback on the Graphite implementation presented at the HEPSYSMAN meeting. Also considering integrating Site Nagios.
  • Glasgow dashboard now packaged and can be downloaded here.
On-duty - Dashboard ROD rota

Tuesday 23rd July

  • Looking at options to supplement ROD team. Tier-1 may provide some effort.
  • The Operations Dashboard is full of SHA-2 critical alarms. As most sites are failing one or more SHA-2 tests, tickets are being created against most sites. These alarms are generated by the Midmon Nagios, which checks whether each service endpoint is SHA-2 compliant. A list of SHA-2 ready middleware has been produced, as has a summary of the related SAM tests. Most of the alarms relate to the CREAM CE. The easiest way to solve this issue is to upgrade to the latest EMI-2 or EMI-3 release. The baseline release for the CREAM CE is update 10, released in EMI on 2nd April 2013.
Rollout Status WLCG Baseline

Tuesday 9th July

  • New EMI2 and EMI3 release yesterday. No staged rollout requests yet. Imperial upgraded their WMS and they have been somewhat shaky ever since.

Tuesday 18th June

  • New EMI3 CE coming into SR. Liverpool will test.
  • A lot of EMI3 testing done at Brunel.
  • The EMI-3 testing page contains all the issues I am aware of. It's a wiki though, so if you find an issue, please put it in the appropriate category.

Tuesday 14th May

  • A reminder. Please could sites fill out the EMI-3 testing contributions page. This is for all testing not just SR sites as we want to know which sites have experience with each component.

References


Security - Incident Procedure Policies Rota

Tuesday 13th August

  • Still getting a few positives in Pakiti. Does this need further follow-up?

Tuesday 30th July

  • One UK site recently appeared on the pakiti critical list.

Monday 22nd July

  • A summary of the SSC6 findings was circulated last week. Questions?


Services - PerfSonar dashboard | GridPP VOMS

Tuesday 13th August

  • PerfSONAR: The dashboard is showing more problems across various sites - presumably the monitoring?
  • VOMS: Using multiple VOMS servers in the ops portal (VO ID cards) requires careful use of designations.

Tuesday 23rd July

  • PerfSONAR: the issues with the WLCG mesh appear to be understood and a new minor release (e.g. 3.3.1) is likely to be released. In the meantime please could sites upgrade by following instructions here but leave the WLCG mesh URL (tests-wlcg-all.json) commented out. Please also update the site progress page.
  • Where are we with the VOMS rollout?

Monday 10th June

  • Issue with neurogrid.incf.org ownership. Is more guidance needed?
  • Where are we with the perfsonar mesh?
  • Are we ready for full rollout of the VOMS backups?


Tickets

Monday 12th August 2013, 14.00 BST
55 Open UK tickets this week. Let's take a couple of deep breaths, then go through them all. Yep, all of them!

NGI tickets:

Unresponsive VOs (5/7):
https://ggus.eu/ws/ticket_info.php?ticket=95442
Master ticket, on hold.
https://ggus.eu/ws/ticket_info.php?ticket=95473
The gridpp ticket. Jeremy is onto this, hoping to wrap it up soon. In progress (12/8) SOLVED
https://ggus.eu/ws/ticket_info.php?ticket=95472
The minos ticket. The state of the minos VO is still unknown, but it is suspected to be defunct. This was set in progress by a ticket manager, although technically it still doesn't have a home. In progress (26/7)
https://ggus.eu/ws/ticket_info.php?ticket=95469
The supernemo ticket. Gianfranco has confirmed it's "not his problem" anymore, and given a few names to try to contact at UCL. In progress (29/7)

NGS decommissioning:
https://ggus.eu/ws/ticket_info.php?ticket=95833 ral-ngs2
https://ggus.eu/ws/ticket_info.php?ticket=96141 oxford-ngs2
https://ggus.eu/ws/ticket_info.php?ticket=96128 manchester-ngs2
https://ggus.eu/ws/ticket_info.php?ticket=96538 NGS-SHEFFIELD
Nothing to see here really; on hold until JK is back from leave, and it doesn't really affect us.

Other NGI tickets:
https://ggus.eu/ws/ticket_info.php?ticket=94780
Cloud site creation request. The NGI has been asked for an update; JK has asked others for feedback but is currently on leave. In progress (probably should be on hold if JK isn't back for a while) (5/8)


gLExec tickets (1/7):
SUSSEX https://ggus.eu/ws/ticket_info.php?ticket=95309 Some progress. On hold (23/7)
CAMBRIDGE https://ggus.eu/ws/ticket_info.php?ticket=95306 Will get to it in late summer. On hold (9/7)
BRISTOL https://ggus.eu/ws/ticket_info.php?ticket=95305 After the current work-pile is conquered. Also, as an aside, going for an ARC CE? Interesting. On hold (11/7)
BIRMINGHAM https://ggus.eu/ws/ticket_info.php?ticket=95304 Aim to do it in ~August, along with other upgrades. On hold (9/7)
ECDF https://ggus.eu/ws/ticket_info.php?ticket=95303 On hold (1/7)
DURHAM https://ggus.eu/ws/ticket_info.php?ticket=95302 Some progress made, but things stalled. Should be on hold if things don't pick up again. In progress (8/8)
SHEFFIELD https://ggus.eu/ws/ticket_info.php?ticket=95301 On hold (10/7)
MANCHESTER https://ggus.eu/ws/ticket_info.php?ticket=95300 Will do it in the October upgrade. On hold (1/7)
LANCASTER https://ggus.eu/ws/ticket_info.php?ticket=95299 Trying to get it to work on the tarball; not having much luck. On hold (17/7)
UCL https://ggus.eu/ws/ticket_info.php?ticket=95298 Won't start until the end of August. On hold (29/7)
RHUL https://ggus.eu/ws/ticket_info.php?ticket=95297 Another for the end of August. On hold (16/7)
QMUL https://ggus.eu/ws/ticket_info.php?ticket=95296 Almost there; just need to roll out SL6 to all their nodes. On hold (12/8)
EFDA-JET https://ggus.eu/ws/ticket_info.php?ticket=95295 Some confusion over JET's status was had. Otherwise waiting until later to deploy this. On hold (19/7)

SHA-2 (22/7):
ECDF https://ggus.eu/ws/ticket_info.php?ticket=96002 On hold (23/7)
DURHAM https://ggus.eu/ws/ticket_info.php?ticket=96001 Will upgrade in September. On hold (31/7)
MANCHESTER https://ggus.eu/ws/ticket_info.php?ticket=96081 Again in the October upgrade. On hold (23/7)
LANCASTER https://ggus.eu/ws/ticket_info.php?ticket=95999 Will do this week. On hold (12/8)
TIER 1 https://ggus.eu/ws/ticket_info.php?ticket=95996 In progress, but not much news. Maybe should be on hold? In progress (22/7)
NEW 12/8 RALPP https://ggus.eu/ws/ticket_info.php?ticket=96588 Just assigned yesterday (12/8)

Common or Garden tickets:

SUSSEX
https://ggus.eu/ws/ticket_info.php?ticket=96469 (8/8)
An ops ticket for CREAMCE-JobSubmit failures. Not acknowledged yet. Assigned (8/8) Update: in progress; Emyr reports the BDII config disappeared (auto-update accident?).

https://ggus.eu/ws/ticket_info.php?ticket=96470 (8/8)
Another ops ticket, for the SRM-GetSURLs tests. Emyr has posted an explanation for the problems. In progress (9/8)

https://ggus.eu/ws/ticket_info.php?ticket=96556 (10/8)
Another, slightly younger, ops ticket. This test is CREAMCE-CertLifetime. The cert expired 3 days ago (or an old one has snuck back onto the server - that's happened to me more than once). Assigned (10/8)
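
Expired host certificates (as flagged by CREAMCE-CertLifetime above) are easy to catch in advance. A minimal sketch, using only Python's standard library, computes the days to expiry from a certificate's notAfter string as printed by `openssl x509 -enddate -noout`; the dates below are hypothetical, purely for illustration:

```python
import datetime
import ssl

def days_until_expiry(not_after: str, now: datetime.datetime) -> int:
    """Days until a cert's notAfter date (the string format used by
    ssl.getpeercert() and `openssl x509 -enddate -noout`).
    A negative result means the certificate has already expired."""
    ts = ssl.cert_time_to_seconds(not_after)  # epoch seconds, UTC
    expiry = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=ts)
    return (expiry - now).days

# Hypothetical dates: a cert that expired 3 days before "now" would trip the probe.
now = datetime.datetime(2013, 8, 10, 12, 0, 0)
print(days_until_expiry("Aug  7 12:00:00 2013 GMT", now))  # -3: already expired
print(days_until_expiry("Oct  7 12:00:00 2013 GMT", now))  # 58
```

Running something like this from cron against each service host would turn a red SAM test into a warning email a few weeks beforehand.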

https://ggus.eu/ws/ticket_info.php?ticket=95165 (28/6)
Duncan has asked you to check your perfSONAR, which might be affected by the firewall work mentioned in 96470. But this ticket is looking mighty neglected. On hold - last "proper" update was (1/7)

RALPP
https://ggus.eu/ws/ticket_info.php?ticket=96287 (31/7)
Atlas were seeing timeouts on their deletion service at RALPP. Alaistair noticed a correlation between the times of these failures and those at the Tier 1 (96079). Chris asked if the errors were spread evenly or came in bursts; Brian posted some information that to me suggests bursts. In progress (6/8) Update: the problem still persists at both T1 and T2.

https://ggus.eu/ws/ticket_info.php?ticket=96531 (9/8)
Someone (lhcb? I recognise the submitter's name) has spotted 444444 jobs being advertised at RALPP. No news from the site yet; such is the peril of Friday tickets (especially over the summer). But of course you'll fix that as soon as you read this... Assigned (9/8) Update: not only acknowledged, but solved. lcg-info-dynamic-scheduler-pbs.noarch was missing - screwed-up dependencies somewhere?

OXFORD
https://ggus.eu/ws/ticket_info.php?ticket=96440 (6/8)
Actually a ticket for the Nagios at Oxford. Chris W noticed ops tests making some odd requests, and noted the old ticket 70066 where he spotted atlas doing similar. Kashif is on the case. In progress (7/8)

BRISTOL
https://ggus.eu/ws/ticket_info.php?ticket=96261 (30/7)
A CMS user had trouble writing into a path at Bristol. Lukasz couldn't see anything wrong, and another user has written to the volume without error, so the submitter has been asked if he still sees a problem. No reply yet. Waiting for reply (5/8)

https://ggus.eu/ws/ticket_info.php?ticket=96483 (8/8)
Bristol had some obsolete GLUE 2 entries in their publishing. The Bristol team are on it. In progress (9/8)

BIRMINGHAM
https://ggus.eu/ws/ticket_info.php?ticket=95418 (4/7)
Alice, what's the matter? They'd like CVMFS installed at Birmingham. Due to the lack of urgency on this change, Mark is leaving it until after the other work in this Summer of Upgrades. On hold (17/7)

https://ggus.eu/ws/ticket_info.php?ticket=96555 (10/8)
SRM-Put ops test failures hitting Birmingham. Space has run out; Mark has his shoehorn out to create more, but it will take a little while to sort out. In progress (12/8)

https://ggus.eu/ws/ticket_info.php?ticket=96533 (9/8)
LHCb have asked for g++ to be installed at Birmingham. Mark asked if this is urgent, and I think the LHCb reply can be summarised as "yes". In progress (9/8)

GLASGOW
https://ggus.eu/ws/ticket_info.php?ticket=96528 (9/8)
Glasgow are also seeing 444444 waiting jobs on some of their shares. Gareth pointed out that the bad CEs are newer EMI ones; the CREAM developers have been involved. In progress (12/8)

https://ggus.eu/ws/ticket_info.php?ticket=96234 (29/7)
Request to support the new HyperK VO on the Glasgow WMS. Glasgow would like to wait until the VO is supported on all the VOMS servers and the Operations Portal. Chris points out that it is supported on all of the former; the latter is being a pain (is what I think the implication was). On hold (2/8)

https://ggus.eu/ws/ticket_info.php?ticket=96231 (29/7)
Sno+ have seen a lot of failures from jobs going through one of Glasgow's WMSii. The problem looks to have been ephemeral, but some zombie-job clean-up was needed. This was at the end of July; Sno+ have been asked if they still have a problem. Waiting for reply (8/8)

ECDF
https://ggus.eu/ws/ticket_info.php?ticket=96331 (2/8)
Failing the ApelDN publishing ops tests. It turned out "publishGlobalUserName no" snuck into the new CE configuration. Just waiting for the republishing to soak in. In progress (12/8) Solved now.

DURHAM
https://ggus.eu/ws/ticket_info.php?ticket=96530 (9/8)
Another 444444 waiting jobs ticket. Not acknowledged yet though. Assigned (9/8) Update: in progress.

https://ggus.eu/ws/ticket_info.php?ticket=96554 (10/8)
Ops CREAMCE-JobSubmit failures. Assigned (10/8)

MANCHESTER
https://ggus.eu/ws/ticket_info.php?ticket=96582 (12/8)
An atlas user and the UK atlas team have spotted some files that they can't access at Manchester, in ATLASSCRATCHDISK. Assigned (12/8) Update: in progress; the machines are back online. There was a problem with some network kit.

QMUL
https://ggus.eu/ws/ticket_info.php?ticket=94746 (10/6)
Biomed haunting the QM SE's information. Reinstalling the SE didn't kill off the entries; the Storm developers have been called in. On hold (31/7)

BRUNEL
https://ggus.eu/ws/ticket_info.php?ticket=96217 (29/7)
CMS have spotted that the BDII seems to be publishing inconsistent Wall/CPU time (2880 for one, 72 for the other, so one is in minutes and t'other is in hours). This is a known issue, fixed in EMI-3 (ticket 91859). Reading the ticket, Raul doesn't intend to upgrade just to fix this until he's given it proper testing. As he suggested, it's probably best to put it on hold until then. In progress (6/8)

EFDA-JET
https://ggus.eu/ws/ticket_info.php?ticket=96526 (9/8)
LHCb are seeing some 'certificate verify failed' errors at efda-jet. Not something I've seen before - CA certificate problems maybe? In progress (9/8)

And last but by no means least:

TIER 1
https://ggus.eu/ws/ticket_info.php?ticket=96482 (8/8)
CMS have noticed transfers from Caltech to RAL failing. The problem looks to be transient; Brian asked if retries also fail. Waiting for reply (8/8)

https://ggus.eu/ws/ticket_info.php?ticket=96235 (29/7)
Chris W has asked for an LFC for HyperK. In progress, but slowed by vacations. In progress (9/8)

https://ggus.eu/ws/ticket_info.php?ticket=86152 (17/9/2012)
The oldest open ticket: "correlated packet-loss on perfsonar host". The last news was that the upgrade to the Tier 1 backbone/uplink was still in the planning stage. But is the original problem still there? On hold (17/6)

https://ggus.eu/ws/ticket_info.php?ticket=96321 (2/8)
The RAL SE is failing Sno+ Nagios tests. Looks to be a problem with Kashif being mapped to t2k - the problems seem to be authentication based (but what about the ops tests - do they pass too? I smell a possible red herring). Waiting for reply (6/8)

https://ggus.eu/ws/ticket_info.php?ticket=96233 (29/7)
Request for HyperK support on the RAL WMS. In progress, but again summer vacations are slowing things down. In progress (9/8)

https://ggus.eu/ws/ticket_info.php?ticket=91658 (20/2)
WebDAV support on the RAL LFC. Some good progress has been made by the looks of it, but again people going on well-deserved, probably much-needed holiday is slowing things down over the summer. In progress (9/8)

Tools - MyEGI Nagios

Monday 12 Aug

  • VO-Nagios

t2wlcgnagios.physics.ox.ac.uk monitors a few UK VOs and uses a robot certificate. Due to some confusion the robot certificate expired. I had applied for an extension beforehand, but CertWizard doesn't understand robot certificates and I thought it had been extended. Finally Jens stepped in and sorted it out. The VO Nagios is now working.

  • SHA2 Certificates

I have been issued a SHA-2 certificate by Jens. I tested a few CEs and some interesting results came out. The GridPP VOMS server is SHA-2 compatible, so SHA-2 proxies can be created for VOs hosted at voms.gridpp.ac.uk. None of the CERN VOMS servers are SHA-2 compatible, but there is a workaround to add a secondary SHA-2 certificate; details are here: https://twiki.cern.ch/twiki/bin/view/LCG/SHA2readinessTesting#SHA_2_VOMS_server I have added my SHA-2 certificate but it is not approved yet as most people are on holiday. Interestingly, when I submitted a few jobs as ngs.ac.uk with the SHA-2 certificate, they finished successfully on CEs which are not supposed to be SHA-2 compliant. I will test again with the OPS VO to confirm.
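
A quick way to check whether a given certificate (host cert or proxy file) is SHA-2 signed is to read its signature algorithm with openssl. A sketch using a throwaway self-signed certificate under /tmp (the paths and subject are illustrative; point the second command at a real hostcert or proxy file instead):

```shell
# Make a throwaway self-signed cert with a SHA-256 signature (illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
    -subj "/CN=sha2-test" -keyout /tmp/sha2-test.key -out /tmp/sha2-test.pem

# Read back the signature algorithm: 'sha256WithRSAEncryption' means SHA-2,
# while 'sha1WithRSAEncryption' would mean the cert is not SHA-2 signed.
openssl x509 -in /tmp/sha2-test.pem -noout -text | grep 'Signature Algorithm'
```

The same `openssl x509 ... | grep` check works on a VOMS proxy file (typically /tmp/x509up_u$(id -u)), though note the proxy chain contains several certificates and only the first is shown this way.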

Tuesday 23rd July

  • In a campaign to update VO ID card details, it turns out that a few of our supported VOs are obsolete: babar, possibly supernemo, and ngs.ac.uk. The first of these can be safely removed, but we need to confirm our announcement process.
VOs - GridPP VOMS VO IDs Approved VO table

Monday 12 August

  • HyperK.org
    • VOMS servers set up (Manchester, Oxford, Imperial)
    • VOID card - stalled on a homepage.
    • WMS set up (Imperial) - awaiting Glasgow, RAL
    • Site set up (QMUL)
    • LFC - in progress
    • CVMFS - considering


  • SNO+
    • Dirac set up for some CEs
  • Epic
    • Doing stuff
  • ngs.ac.uk VO - any reason to keep it?
  • Software areas for SL6
    • Are we keeping the same areas as sl5?
    • What about the software tags?
    • Push CVMFS?

Friday 2 August 2013

  • SNO+ would like to streamline their submission
    • Is Dirac possible
  • WebDAV support at RAL LFC
    • Firewall seems to be in the way.
  • HyperK.org
    • Waiting on WMS support from somebody.
    • One month so far since starting this off - can we do it more quickly next time?
Site Updates

Actions


Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Empty

GridPP ops meeting - Agendas Actions Core Tasks

Empty


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda EVO meeting

Wednesday 14th August

  • Operations report
  • There have been some problems with the cms-tape service class within Castor that are under investigation.
  • The test ARC-CEs have been enabled for more non-LHC VOs (now includes hone, biomed, mice, na62, superb, snoplus.)
  • We continue to work with Atlas on the testing of FTS3.
WLCG Grid Deployment Board - Agendas MB agendas

Empty



NGI UK - Homepage CA

Empty

Events

Empty

UK ATLAS - Shifter view News & Links

Empty

UK CMS

Empty

UK LHCb

Empty

UK OTHER
  • N/A
To note

  • N/A