GDB 10th July 2013

From GridPP Wiki

Latest revision as of 13:36, 10 July 2013

See the talks for the slides themselves: https://indico.cern.ch/conferenceDisplay.py?confId=197806

"Welcome" - Michel Jouvin

  • Request for more people to take minutes/notes.
  • No GDB in August.
  • No pre-GDBs before December?

WLCG workshop proposal: usually co-located with CHEP, but not this time.

  • Will turn November GDB (Copenhagen, 12-13 Nov) into 2-day workshop
  • Talk/theme ideas invited

"Security of Collaborating Infrastructure" - Dave Kelsey

On the proposal that SCI-compliant wording is sufficient rather than using specific wording:

  • Q. Prescriptive wording can be a problem (e.g. for OSG), so maybe a recommendation would be better?
    • A. Yes, this is what the compliant-wording approach seeks to address.
  • Q. How do you see the process inside WLCG?
    • A. Not yet decided how far to go in a formal approval process.
  • Q. Traceability in the US. Traceability can be done via the overlay rather than at the infrastructure level.
    • A. Need to be technology independent.

"DPM Collaboration: first months and 12 month roadmap" - Oliver Keeble

  • Q. How do you see the interaction with VMs? And the HTTP interface?
    • A. HTTP is supported for a number of reasons, including client neutrality.
  • Q. Staged rollout during EPEL testing. What after that? Did you talk to UMD?
    • A. EGI needs to approve the idea of using EPEL testing during staged rollout. In the first instance the idea is to copy files from EPEL.
  • Q. May this mean packages stay in EPEL testing for longer?
    • A. Yes, but it will mean they take less time to get into UMD.

"Summary of pre-GDB about clouds" - Speaker

See https://indico.cern.ch/conferenceDisplay.py?confId=253858 for yesterday's slides.

  • Q. Some sites need batch systems for other user groups. Economic approach would need to accommodate that.
    • A. Yes, it is not the same solution for everyone. The group explores clouds for the sites that want to change.
  • Q. Much of cloud usage depends on pilot factories. What about Tier-0 work? Could that use clouds?
    • A1. Not discussed yesterday.
    • A2. For LHCb, the Tier-0 is nothing special, so we use it as another grid site. Prompt work could therefore run on the CERN cloud too. Don't think we should distinguish.

"Ops Coordination Report" - Andrea Sciaba

Call for input on monitoring topics (see slides). Tight timetable (~ this week).

"Potential Future Software Lifecycle Management Needs for WLCG" - Markus Schulz

  • Q. EPEL is not under our control, but most of the middleware is maintained by WLCG people. Versions can be blocked to avoid getting them. How long does it take to test new versions?
    • A. Depends on how it is organised: it can take months. Need to invest to make sure the bits that really matter are under our control.
  • Q. On the slide "Gaps": WLCG sites didn't contribute much to staged rollout, but the cases on that slide are taken care of already. Christina is doing WN/UI.
    • A. In the EMI period there was a hard link, so nothing could go into the repo without passing through the testing system. Now it is trust-based. The gap for the WLCG community: staged rollout does not verify that common use cases work.
  • Q. On the slide "Issues": the principle of UMD is to give sites stable software, and there is an inherent cost in providing a stable release. When a quick fix is needed, there are procedures. Sites have never complained about timing in UMD, but they wanted stable software. UMD will be populated via EPEL or not. Not aware of an operational problem at the moment.
    • A1. Might not be aware because sites take software directly from the product team (e.g. DPM).
    • A2. The latency comes from problems and a lack of resources; the procedure can be changed so that testing starts earlier and feedback reaches the product team before the software gets to UMD testing.
    • A3. At the moment the EMI repo is more or less a bucket before software goes to UMD. The concern is that in EPEL releases can happen whenever you want: staged rollout becomes continuous, and it is difficult to understand what is happening during that.
    • A4. Can collect several packages in a single update if desired.
    • A5. If a small fraction of sites covering the most common scenarios did daily updates from EPEL-testing or a WLCG test repository, then new releases could be monitored all the time with real work. This would need effort to organise, and resources from sites. It could then move to a continuous approach; UMD staged rollout could be part of this.
    • A6. Another advantage would be security updates: continuous security updates would be validated so that they do not break the system.
    • A7. Sounds a bit like Pre-Production Service in EGEE. Experiments had problems with using that kind of resource.
    • A8. The reason the PPS wasn't used was that it was a completely separate infrastructure. cvmfs can also be used to distribute the versions for testing.
    • A9. Would be helpful to have a concrete list of actions about where we're going.
  • Q. Continuous verification infrastructure: is this something we can follow up in WLCG?
    • A. Need clear WLCG MB statement that resources should be given to this, and then operations follow up. Technical implementation in a TF?
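The "daily updates from EPEL-testing" idea raised above could look something like the following sketch at a volunteer site. The package names and log path are illustrative assumptions (DPM is used as an example because it was discussed earlier), not a prescribed setup:

```shell
# Sketch only: a site volunteering to validate middleware continuously
# pulls updates from the EPEL testing repository once a day.
# Package names below (dpm*) are examples, not an agreed list.
yum -y --enablerepo=epel-testing update 'dpm*'

# A cron entry (hypothetical path and schedule) so the updates run
# unattended and the output can be fed into site monitoring:
# 0 4 * * * root yum -y --enablerepo=epel-testing update 'dpm*' >> /var/log/epel-testing-updates.log 2>&1
```

Running real work against such hosts would then exercise new releases continuously, as suggested in A5, at the cost of some site effort to watch the results.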

"OSG Update" - Brian Paul Bockelman

  • Q. On "Security": implementation will be discussed next week. Argus technology; translation of the banned list for services/NGIs that don't use Argus. Timescale of a couple of weeks for a decision.
  • Q. Is someone from OSG involved with working party?
    • A1. Yes, some involvement. Currently there is no technical mechanism for shipping this information to sites.
    • A2. Global banning won't be a requirement until it is technically possible.
  • Q. What about joint XACML/SAML work?
    • A. ARGUS doesn't implement interoperability profile.
  • Q. OSG CREAM CE evaluation?
    • A. June-July 2012. In the end it focussed on the Condor-based CE.
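For context on the central-banning discussion above, a minimal sketch of how a ban is typically entered with the Argus PAP admin CLI; the DN below is hypothetical, and the exact command syntax may vary between Argus versions:

```shell
# Hedged sketch: recording a central ban in the Argus PAP.
# 'pap-admin ban' takes an attribute type (subject/fqan/ca) and a value;
# the DN here is a made-up example.
pap-admin ban subject "/DC=org/DC=example/CN=Banned User"

# Inspect the resulting policies to confirm the ban is present:
pap-admin list-policies
```

The translation problem raised in the discussion is exactly the step this sketch omits: propagating such a list to services and NGIs that do not run Argus at all.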