Operations Bulletin 100912


Bulletin archive


Week commencing 3rd September 2012
Task Areas
General updates

Friday 7th September

  • Discussion required on identifying the active Nagios instance.


Tuesday 4th September

  • Request to update networking information in the GridPP wiki
  • Non-LHC VOs and testing of EMI WNs (GridPP test clusters seem to prefer EMI-2 SL5)
  • gLite support calendar has been updated (see interoperation section) - extensions to 30th November.
  • GridPP list of Technology Development contributions
  • Upcoming GDB (agenda). There is still a call for participation in the WLCG Operations Coordination Team and a kick-off meeting will be held on 20th September.
  • Reminder of EGI GPGPU questionnaire. The deadline is 13th September.


Tier-1 - Status Page

Tuesday 4th September

  • No major operational issues to report for the last week.
  • LHCb being moved to use the T10KC tapes today.
  • Continuing test of hyperthreading.
  • A further ten worker nodes, in normal production, have been installed with EMI-2 SL-5. These have been selected from a variety of different batches of hardware.
  • As stated before: CVMFS available for testing by non-LHC VOs (including "stratum 0" facilities).
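
For a non-LHC VO wanting to confirm that a CVMFS repository is actually visible from a node, a minimal check might look like the sketch below (Python; the repository name is a placeholder, not a real GridPP repository, and the CVMFS client tools are assumed to be installed):

 #!/usr/bin/env python
 """Minimal sketch: check that a CVMFS repository is mounted and responding.
 The repository name is a placeholder, not a real GridPP repository."""
 import os
 import subprocess
 import sys
 
 REPO = "example.gridpp.ac.uk"  # hypothetical repository name
 
 def cvmfs_ok(repo):
     mountpoint = os.path.join("/cvmfs", repo)
     # Checking the mountpoint triggers autofs to mount the repository.
     if not os.path.isdir(mountpoint):
         return False
     # 'cvmfs_config probe' checks that the named repository responds.
     return subprocess.call(["cvmfs_config", "probe", repo]) == 0
 
 if __name__ == "__main__":
     if cvmfs_ok(REPO):
         print("%s is mounted and responding" % REPO)
     else:
         sys.exit("%s is not available on this node" % REPO)
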
Storage & Data Management - Agendas/Minutes

Wednesday 29th August

  • Update on DPM support plan - aim to define tasks, then look for "volunteers"
  • Planning ahead for coming events - particularly GridPP29
  • Volunteers for ATLAS job recovery?

Wednesday 15th August

  • Tier-1 approach for disk only solution.
  • Considering input to a community support model for DPM and possible alternatives.

Tuesday 24th July

  • Sam testing xrootd redirection on Glasgow test cluster - going well.


Accounting - UK Grid Metrics HEPSPEC06

Wednesday 18th July - Core-ops

  • Still need definitive statement on disk situation and SL ATLAS accounting conclusions.
  • Sites should again check Steve's HS06 page.


Wednesday 6th June - Core-ops

  • Request sites to publish HS06 figures from new kit to this page.
  • Please would all sites check the HS06 numbers they publish. Will review in detail on 26th June.

Friday 11th May - HEPSYSMAN

  • Discussion on HS06, reminding sites to publish results from 32-bit mode benchmarking. A reminder for new kit results to be posted to the HS06 wiki page. See also the blog article by Pete Gronbech. The HEPiX guidelines for running the benchmark tests are at this link.
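
As a concrete illustration of how per-batch benchmark results roll up into the published site figures (illustrative numbers only, not real site data; the real per-node numbers come from the HEPiX 32-bit procedure linked above):

 #!/usr/bin/env python
 """Illustrative sketch: aggregate per-batch HS06 results into site totals.
 The batch data below are invented for the example; real numbers come from
 running the HEPiX benchmark procedure (32-bit mode) on each hardware batch."""
 
 # (nodes, logical CPUs per node, measured HS06 per node) -- hypothetical
 batches = [
     (40, 8, 65.0),
     (24, 12, 98.5),
     (10, 16, 130.2),
 ]
 
 total_hs06 = sum(n * hs06 for n, _, hs06 in batches)
 total_cpus = sum(n * cpus for n, cpus, _ in batches)
 
 print("Site total HS06: %.1f" % total_hs06)
 print("Average HS06 per logical CPU: %.2f" % (total_hs06 / total_cpus))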


Documentation - KeyDocs

Tuesday 4th September

Significantly better than before:

KeyDocs monitoring status: Grid Storage(7/0) Documentation(3/0) On-duty coordination(3/0) Staged rollout(3/0) Ticket follow-up(3/0) Regional tools(3/0) Security(3/0) Accounting(3/0) Core Grid services(3/0) Wider VO issues(3/0) Grid interoperation(3/0) Monitoring(2/0) Cluster Management(1/0) (brackets show total/missing)
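
The "(total/missing)" convention is easy to check mechanically; a small parser for the status line above might look like this sketch (the truncated status string is just for illustration):

 #!/usr/bin/env python
 """Sketch: parse a KeyDocs status line of 'Area(total/missing)' entries
 and flag any task areas with missing key documents."""
 import re
 
 status = ("Grid Storage(7/0) Documentation(3/0) Monitoring(2/0) "
           "Cluster Management(1/0)")  # truncated example of the line above
 
 for area, total, missing in re.findall(r"([\w -]+?)\((\d+)/(\d+)\)", status):
     if int(missing) > 0:
         print("%s: %s of %s key documents missing" % (area.strip(), missing, total))
     else:
         print("%s: complete (%s docs)" % (area.strip(), total))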

Thursday 26th July

All the "site update pages" have been reconfigured from a topic oriented structure into a site oriented structure. This is available to view at https://www.gridpp.ac.uk/wiki/Separate_Site_Status_Pages#Site_Specific_Pages

Please do not edit these pages yet - any changes would be lost when we refine the template. Any comments gratefully received, contact: sjones@hep.ph.liv.ac.uk

Interoperation - EGI ops agendas

Tuesday 4th September

  • The end of security support for the following products has been extended to 30/11/2012 (http://glite.cern.ch/support_calendar/):

- gLite 3.2 glite-UI
- gLite 3.2 glite-WN
- gLite 3.2 glite-GLEXEC_wn
- gLite 3.2 glite-LFC_mysql / glite-LFC_oracle
- gLite 3.2 glite-SE_dpm_disk / glite-SE_dpm_mysql


Monday 20th August

  • EGI operations meeting minutes available.
  • SAM/Nagios 17.1 release (needed for EMI2 WN monitoring) expected around 22nd August (but requires a workaround!).
  • Next meeting 10th September


Monday 30th July (last EGI ops meeting agenda).

  • Check which VOs have tested/used EMI WNs. There is a need to avoid any problems when the gLite 3.2 WN End of Life is announced.
  • EMI: Forthcoming, 9th August: EMI-1: BDII, CREAM, WMS. EMI-2: BDII, Trustmanager. The CREAM and WMS updates look like they fix some useful bits.
  • StagedRollout:

IGTF: CA 1.49, SR this week.

SAM/Nagios 17 is still in SR and has problems. A patch is ready and should be done soon. This is needed for support of EMI-2 WNs.

UMD 1.8: BLAH, gsoap-gss, StoRM and IGE GridFTP to be released soon.

Lots of stuff for UMD-2; note the WN problems. There is some discussion; it looks like the EMI-2 WN will not be released to the UMD until the SAM/Nagios problems are solved.

  • WMS vulnerabilities: some discussion. UK all patched and up to date, yay!


Monitoring - Links MyWLCG

Monday 2nd July

  • DC has almost finished an initial ranking. This will be reviewed by AF/JC and discussed at the 10th July ops meeting.

Wednesday 6th June

  • Ranking continues. Plan to have a meeting in July to discuss good approaches to the plethora of monitoring available.
  • Glasgow dashboard now packaged and can be downloaded here.
On-duty - Dashboard ROD rota

Monday 3rd September

  • Some issues with ROD handover last week. We need to agree a handshake (perhaps in conjunction with the report - AM).
  • John W is on-duty this week.
  • A new rota needs to be created for beyond September.


Monday 27th August

  • COD believes Brunel has been in downtime for a month - we need to respond to the ticket.

Friday 24th August - AM

  • Quite a patchy week with many problems at sites. Several still have unplanned downtimes due to outages, with varying degrees of severity. The WMS at Imperial continues to be unstable (due to overload?) and is awaiting reinstallation on a new machine; the Dashboard has been particularly slow to reflect the IC WMS's return to proper operation. I've created another ticket to track these alarms. Brunel has had several alarms, none of which lasted more than 12 hours or so, and yet today it is showing as red. Confirming the inconsistency, no thresholds are exceeded for Brunel or NGI_UK in the metrics and performance index pages in the Dashboard. If this red state persists, I think it should be reported as a Dashboard bug.
Rollout Status WLCG Baseline

Monday 3rd September

  • Test queues for EMI WNs: RAL T1, Oxford, Liverpool?, Brunel

Tuesday 31st July

  • Brunel has a test EMI2/SL6 cluster almost ready for testing - who (smaller VOs?) will test it with their software?

Wednesday 18th July - Core ops

  • Sites (that needed a tarball install) will need to work on their own glexec installs
  • Reminder that gLite 3.1 no longer supported. 3.2 support is also decreasing. Need to push for EMI.
Security - Incident Procedure Policies Rota

Monday 30th July

  • WMSes patched/configured correctly.

Monday 23rd July

  • WMS vulnerabilities identified. Sites will have been contacted. Please respond to tickets ASAP.


Services - PerfSonar dashboard

Tuesday 4th September

  • There was a CA TAG meeting last Tuesday
  • There is an issue with UK certificates and CERN SSO that is being investigated.
  • To prevent CRLs failing to validate and causing alarms/errors, the old CA certificate's lifetime has been extended until March 2013, but no new certificates will be issued under it. This update will be in a September IGTF release. "No person who has a certificate under the old CA will need to do anything special or unusual. No site will need to do anything special or unusual." The purpose of the rollover is to move away from the 2007 key, which is hosted in an old signing module for which support will end. A sketch of how a site might inspect its installed CA certificate follows below.
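
For a site wanting to confirm which CA certificate is installed and when it expires, a minimal sketch using the Python cryptography package follows (the file path is illustrative; IGTF CA certificates normally live under /etc/grid-security/certificates/):

 #!/usr/bin/env python
 """Sketch: report the subject and expiry date of an installed CA
 certificate. The path below is illustrative."""
 from cryptography import x509
 
 CA_CERT = "/etc/grid-security/certificates/UKeScienceCA.pem"  # illustrative
 
 with open(CA_CERT, "rb") as f:
     cert = x509.load_pem_x509_certificate(f.read())
 
 print("Subject: %s" % cert.subject.rfc4514_string())
 print("Expires: %s" % cert.not_valid_after)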


Tuesday 28th August

  • 14 GridPP sites now enabled with perfSONAR. Are they all reporting correctly in the GridPP community?
  • Some asymmetry issues observed. (A case study on resolving a US problem).
  • Setting up representative UK-international tests.


Tickets

Monday 3rd September 14:30 BST.
37 open UK tickets this week. It's the start of the month so it's time for a deep review.

Has anyone else not been receiving their ticket reminders? I haven't for several Lancaster tickets.


UK
https://ggus.eu/ws/ticket_info.php?ticket=84408 (20/7)
Setting up of the neurogrid.incf.org WMS & LFC. Both have been put in place; Catalin wonders if the LFC can be tested. Waiting for reply (29/8)
https://ggus.eu/ws/ticket_info.php?ticket=80259 (14/3)
neurogrid.incf.org creation ticket. Nearly finished now. In Progress (29/8)

https://ggus.eu/ws/ticket_info.php?ticket=68853 (22/3/11)
Brian's ticket to track older DPMs in the UK. Still have Durham, Bristol and Brunel to go at the last update (but Brunel are retiring their old SE). On Hold (30/7)

https://ggus.eu/ws/ticket_info.php?ticket=84381 (19/7)
Setting up the COMET VO. Registering in the EU Ops Portal (ticket 85736); on hold till this is done (3/9).

https://ggus.eu/ws/ticket_info.php?ticket=82492 (24/5)
Chris' ticket to change the reminder periods for the GridPP VOMS server. Assigned to Robert Frank; On Hold during the VOMS transition (28/8)

TIER 1
https://ggus.eu/ws/ticket_info.php?ticket=85438 (23/8)
ATLAS were seeing FTS transfer failures from RAL. Some files have been corrupted and replacements may have to be fetched from tape. Waiting for Reply (31/8)

https://ggus.eu/ws/ticket_info.php?ticket=85077 (13/8)
Biomed were seeing their Nagios tests fail to register files at RAL, but it looks to be a (peculiar) problem with their SAM jobs. Other units are involved. In Progress (3/9).

https://ggus.eu/ws/ticket_info.php?ticket=85023 (9/8)
SNO+ having troubles with one of the RAL WMSs. No reply after a request to attempt job submission to lcgwms02. Waiting for Reply (10/8)

https://ggus.eu/ws/ticket_info.php?ticket=84492 (24/7)
SNO+ having job-matching problems at RAL. Some odd behaviour, but In Progress (31/8)

GLITE 3.1 Upgrade tickets (14/8):
https://ggus.eu/ws/ticket_info.php?ticket=85189 (UCL) In Progress (29/8)
https://ggus.eu/ws/ticket_info.php?ticket=85185 (CAMBRIDGE) In Progress (29/8)
https://ggus.eu/ws/ticket_info.php?ticket=85183 (GLASGOW) On hold (14/8)
https://ggus.eu/ws/ticket_info.php?ticket=85181 (DURHAM) In Progress (On hold?) (14/8)
https://ggus.eu/ws/ticket_info.php?ticket=85179 (Brunel) In Progress (22/8)

UK/SAM/GOCDB
https://ggus.eu/ws/ticket_info.php?ticket=85449 (23/8)
Bristol cancelled an ongoing downtime but weren't brought out of it by the system, thus penalising them. Winnie is out to find the cause of the problem and get back the lost uptime. Reset to "In Progress" after some ticket tennis (3/9)

PHENO/BRUNEL
https://ggus.eu/ws/ticket_info.php?ticket=85011 (28/8)
Pheno seem to be surprised that they have data on the retiring Brunel SE. In Progress (28/8)

SUSSEX
https://ggus.eu/ws/ticket_info.php?ticket=81784 (1/5)
The Sussex Certification Chronicle. Jeremy wants to push to get Sussex out of downtime this week to avoid having to re-certify. In Progress (3/9)

UCL
https://ggus.eu/ws/ticket_info.php?ticket=85467 (24/8)
ATLAS transfer errors to UCL. Clock skew on the head node took some of the blame, but we're seeing more failures with "Error reading token data header" messages. In Progress (30/8)

https://ggus.eu/ws/ticket_info.php?ticket=85549 (28/8)
Last of the User DN accounting tickets (the last child of 85547). In Progress (28/8)

DURHAM
https://ggus.eu/ws/ticket_info.php?ticket=85679 (31/8)
se01 failing Ops tests.
https://ggus.eu/ws/ticket_info.php?ticket=85731 (3/9)
ce01 failing APEL Pub tests.

https://ggus.eu/ws/ticket_info.php?ticket=84123 (11/7)
ATLAS production failures. On hold as Mike expects slow progress (3/9).
https://ggus.eu/ws/ticket_info.php?ticket=83950 (7/7)
LHCb CVMFS errors. On hold (7/8)

https://ggus.eu/ws/ticket_info.php?ticket=68859 (22/3/11)
SE Upgrade ticket. Probably should be On Hold (28/8).

https://ggus.eu/ws/ticket_info.php?ticket=75488 (19/10/2011)
CompChem job failures at Durham. On hold due to the other problems, but once out of the woods it is worth checking whether the problem persists (8/8).

GLASGOW
https://ggus.eu/ws/ticket_info.php?ticket=85025 (9/8)
SNO+ were having problems with one of the Glasgow WMSs (twinned ticket to 85023). Stuart asked for the FQAN used for the jobs as the problems seemed VOMS related, but no news since. Waiting for Reply (10/8)

https://ggus.eu/ws/ticket_info.php?ticket=83283 (14/6)
LHCb seeing a high rate of job failures, likely to be caused by CVMFS. Glasgow upgraded all their nodes to the latest CVMFS but failures are still seen on the "high-core" nodes, correlated with high numbers of ATLAS jobs starting up. Investigation continues. In Progress (30/8)

OXFORD
https://ggus.eu/ws/ticket_info.php?ticket=85496 (25/8)
LHCb had job failures that were not CVMFS related (they reckoned a lack of 32-bit gcc RPMs or some OS difference). The problem seems to have evaporated though; did anything change? In progress, probably can be closed (31/8)

IC
https://ggus.eu/ws/ticket_info.php?ticket=85524 (27/8)
Hone had problems submitting jobs through the Imperial WMSs due to "System load is too high" errors. Some magic was worked, and Hone see a massive improvement and propose to close the ticket. Can be closed (31/8).

LANCASTER (to my shame)
https://ggus.eu/ws/ticket_info.php?ticket=85412 (22/8)
JobSubmit tests failing on one of Lancaster's CEs. With help from LCG-SUPPORT this was tracked to a desync between ICE on the WMS and the CREAM CE. The best solution is a CREAM reinstall, which is being planned. On hold (3/9)

https://ggus.eu/ws/ticket_info.php?ticket=85367 (20/8)
Lancaster's other CE isn't working well for ILC. Would like to reinstall, but will wait until ticket 85412 is solved. On hold (3/9)

https://ggus.eu/ws/ticket_info.php?ticket=84583 (26/7)
Similarly, LHCb are having problems on the same node. Lancaster is suffering a ticket pile-up. On hold (3/9)

https://ggus.eu/ws/ticket_info.php?ticket=84461 (23/7)
T2K transfers fail from RAL to Lancaster. Looks to be a networking problem. With new routing to be put in place soon, hopefully this problem will disappear, as it has eluded understanding. On hold (3/9)

BRISTOL
https://ggus.eu/ws/ticket_info.php?ticket=85286 (17/8)
CMS transfers to Bristol failing. Winnie tracked it to a maxed-out data link. In Progress (20/8)
https://ggus.eu/ws/ticket_info.php?ticket=80155 (12/3/11)
SE upgrade ticket. Bristol are prepping for the upgrade with a test server. On hold (17/8)

RALPP
https://ggus.eu/ws/ticket_info.php?ticket=85019 (9/8)
ILC were having problems running jobs at RALPP. A lot of configuration work was needed, but progress has been made. In Progress (23/8)

RHUL
https://ggus.eu/ws/ticket_info.php?ticket=83627 (27/6)
Biomed seeing negative published space. A repeat of ticket 81439. Despite great efforts this remains unsolved so far. On hold (31/8)

No exciting UK tickets or solved UK tickets that I can see this week (which seems to be the case very often, making me suspect I'm missing something!).

Tools - MyEGI Nagios

Tuesday 25th July

Gridppnagios at Lancaster will remain the main Nagios instance until further announcement. KM is writing down the procedure for switching over to the backup Nagios in case of emergency: https://www.gridpp.ac.uk/wiki/Backup_Regional_Nagios . KM is now away on a month's holiday and may not be able to reply to emails. There is a new email address for any question or information regarding the regional Nagios: gridppnagios-admin at physics.ox.ac.uk. Currently this mail goes to Ewan and Kashif. A sketch of the kind of liveness check behind a switch-over decision is below.
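
A minimal sketch of a check that would inform the switch-over decision (the URLs are placeholders; the authoritative procedure is the Backup_Regional_Nagios page above):

 #!/usr/bin/env python
 """Sketch: probe the primary regional Nagios and report whether a switch
 to the backup looks necessary. URLs are placeholders."""
 from urllib.request import urlopen
 
 PRIMARY = "https://primary-nagios.example.ac.uk/nagios/"  # placeholder
 BACKUP = "https://backup-nagios.example.ac.uk/nagios/"    # placeholder
 
 def alive(url, timeout=15):
     try:
         urlopen(url, timeout=timeout)
         return True
     except Exception:
         return False
 
 if alive(PRIMARY):
     print("Primary Nagios responding; no action needed")
 elif alive(BACKUP):
     print("Primary down, backup reachable -- follow the switch-over procedure")
 else:
     print("Both instances unreachable -- contact gridppnagios-admin")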

Monday 2nd July

  • Switched on the backup Nagios at Lancaster and stopped the Nagios instance at Oxford. Stopping the Oxford instance means that it is not sending results to the dashboard and central DB. Keeping a close eye on it; we will revert to the original arrangement if any problems are encountered.



VOs - GridPP VOMS VO IDs Approved VO table

Monday 27th August


Monday 23rd July

  • CW requested feedback from non-LHC VOs on issues
  • Proxy renewal issues thought to be resolved on all services except FTS - a new version may solve problems on that service.
  • Steve's VOMS snooper application is picking up many site VO config problems. We should review all sites in turn.


Site Updates

Monday 3rd September

  • SUSSEX: Still in downtime. This week?



Meeting Summaries
Project Management Board - MembersMinutes Quarterly Reports

Monday 2nd July

  • No meeting. Next PMB on Monday 3rd September.
GridPP ops meeting - Agendas Actions Core Tasks

Tuesday 21st August - link Agenda Minutes

  • TBC


RAL Tier-1 Experiment Liaison Meeting (Wednesday 13:30) Agenda EVO meeting

Wednesday 5th September

  • Operations report
  • LHCb have been switched from the T10KA to T10KC tapes.
  • Testing of the Castor version 2.1.12 update is around 80% complete and we anticipate being ready to make this update within a few weeks.
  • Test batch queue ("gridTest") available to try out EMI-2/SL5 worker nodes; a submission sketch is below. In addition, a further ten nodes (one from each hardware generation/batch) have been re-installed with EMI-2/SL5 and are running as part of the normal batch system.
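
For anyone wanting to exercise the test queue, a submission from a UI might look roughly like this sketch (the CE endpoint is illustrative; a valid VOMS proxy and the gLite/EMI UI tools are assumed):

 #!/usr/bin/env python
 """Sketch: submit a trivial test job to a test queue on a CREAM CE.
 The CE endpoint below is illustrative; requires glite-ce-job-submit
 from the UI tools and a valid VOMS proxy."""
 import subprocess
 import tempfile
 
 JDL = """[
 Executable = "/bin/uname";
 Arguments = "-a";
 StdOutput = "test.out";
 StdError = "test.err";
 OutputSandbox = {"test.out", "test.err"};
 ]"""
 
 CE = "lcgce.example.ac.uk:8443/cream-pbs-gridTest"  # illustrative endpoint
 
 with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
     f.write(JDL)
     jdl_path = f.name
 
 # -a: automatic proxy delegation; -r: target CE endpoint and queue
 subprocess.check_call(["glite-ce-job-submit", "-a", "-r", CE, jdl_path])
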
WLCG Grid Deployment Board - Agendas MB agendas

July meeting Wednesday 11th July

Welcome (Michel Jouvin)
  • September meeting to include IPv6, LS1 and extended run plans
  • EMI-2 WN testing also in September

CVMFS deployment status (Ian Collier)
  • Recap: 78/104 sites for ATLAS – the UK looks good thanks to Alessandra
  • Using two repos for ATLAS. The local shared area will be dropped in future.
  • 36/86 for LHCb. Two WN mounts. Preference for CVMFS – extra work
  • 5 T2s for CMS. Info for sites: https://twiki.cern.ch/twiki/bin/view/CMSPublic/CompOpsCVMFS
  • Client with shared cache in testing
  • Looking at the NFS client and Mac OS X

Pre-GDB on CE Extensions (Davide Salomoni)
  • https://indico.cern.ch/conferenceDisplay.py?confId=196743
  • Goal – review the proposed extensions + focus on the whole-node/multi-core set
  • Also agree a development plan + timeline for CEs
  • Fixed cores and variable numbers of cores + memory requirements. May impact experiment frameworks.
  • Some extra attributes added in GLUE 2.0 – e.g. MaxSlotsPerJob
  • JDL. Development. Interest. Queue level or site level.
  • How. CE implementations. Plan. Actions.

Initial Meeting with EMI, EGI and OSG (Michel Jouvin)
  • Identify issues related to the end of supporting projects (e.g. EMI)
  • Globus (community?); EMI MW (WLCG); OSG; validation
  • Discussion has not included all stakeholders.

How to identify the best top-level BDIIs (Maria Alandes Pradillo)
  • Only 11% are “properly” configured (LCG_GFAL_INFOSYS 1,2,3)
  • UK BDIIs appear in the top 20 of ‘most configured’
  • A quick check of configured endpoints is sketched below
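
A quick responsiveness check of the endpoints a client has configured might look like this sketch (hostnames are placeholders; the ldapsearch client is assumed; top-level BDIIs serve the GLUE tree under base "o=grid" on port 2170):

 #!/usr/bin/env python
 """Sketch: check that each top-level BDII in LCG_GFAL_INFOSYS responds.
 Hostnames below are placeholders; a well-configured client lists two or
 three, comma separated. Requires the ldapsearch client."""
 import os
 import subprocess
 
 infosys = os.environ.get("LCG_GFAL_INFOSYS",
                          "bdii1.example.ac.uk:2170,bdii2.example.ac.uk:2170")
 
 for endpoint in infosys.split(","):
     cmd = ["ldapsearch", "-x", "-LLL", "-H", "ldap://%s" % endpoint,
            "-b", "o=grid", "-s", "base", "objectClass"]
     rc = subprocess.call(cmd, stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
     print("%s: %s" % (endpoint, "OK" if rc == 0 else "NOT RESPONDING"))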

MUPJ – gLExec update (Maarten Litmaath)
  • ‘glexec’ flag in GOCDB for each supporting CE
  • http://cern.ch/go/PX7p (so far… T1, Brunel, IC-HEP, Liv, Man, Glasgow, Ox, RALPP)
  • Improved instructions: https://twiki.cern.ch/twiki/bin/view/LCG/GlexecDeployment
  • CMS ticketing sites. Working on GlideinWMS.

WG on Storage Federations (Fabrizio Furano)
  • Federated access to data – clarify what needs supporting
  • ‘Fail over’ for jobs; ‘repair mechanisms’; access control
  • So far XROOTD clustering over the WAN = the natural solution
  • Setting up the group.

DPM Collaboration – Motivation and proposal (Oliver Keeble)
  • Context. Why. Who…
  • The UK is the 3rd largest user (by region/country)
  • Section on myths: DPM has had investment. Not only for small sites…
  • New features: HTTP/WebDAV, NFSv4.1, Perfsuite…
  • Improvements with the xrootd plugin
  • Looking for stakeholders to express interest… expect a proposal shortly
  • Possible model: 3-5 MoU or ‘maintain’

Update on SHA-2 and RFC proxy support
  • IGTF wishes CAs to move to SHA-2 signatures ASAP. For WLCG this means using RFC proxies in place of the current Globus legacy proxies.
  • dCache & BeStMan may look at the EMI Common Authentication Library (CANL) – it supports SHA-2 with legacy proxies.
  • IGTF aims for Jan 2013 (then it takes 395 days for SHA-1 to disappear)
  • Concern about the timeline (the LHC run is now extended)
  • Status: https://twiki.cern.ch/twiki/bin/view/LCG/RFCproxySHA2support
  • Plan: deployed SW supports RFC proxies (Summer 2013) and SHA-2 (except dCache/BeStMan – Summer 2013). Introduce SHA-2 CAs Jan 2014.
  • Plan B – a short-lived WLCG catch-all CA
A quick way to spot remaining SHA-1 signatures is sketched below.
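
One way to spot certificates or proxies still carrying SHA-1 signatures is to inspect the signature hash algorithm; a minimal sketch with the Python cryptography package (the default path is illustrative):

 #!/usr/bin/env python
 """Sketch: report the signature hash algorithm of a PEM certificate or
 proxy, to spot SHA-1 signatures ahead of the SHA-2 migration."""
 import sys
 from cryptography import x509
 
 # Default path is illustrative; pass a real file as the first argument.
 path = sys.argv[1] if len(sys.argv) > 1 else "/tmp/x509up_u500"
 
 with open(path, "rb") as f:
     cert = x509.load_pem_x509_certificate(f.read())
 
 algo = cert.signature_hash_algorithm.name  # e.g. 'sha1' or 'sha256'
 print("%s is signed with %s" % (path, algo))
 if algo == "sha1":
     print("SHA-1 signature: affected by the SHA-2 migration plan")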

ARGUS Authorization Service (Valery Tschopp)
  • Authorisation examples & ARGUS motivation (many services, global banning, static policies). Can user X perform action Y on resource Z?
  • ARGUS is built on top of a XACML policy engine
  • PAP = Policy Administration Point (tool to author policies)
  • PDP = Policy Decision Point (evaluates requests)
  • PEP = Policy Enforcement Point (reformats requests)
  • XACML is hidden behind the Simplified Policy Language (SPL)
  • Central banning = hierarchical policy distribution
  • Pilot job authorization – gLExec executes the payload on the WN
  • https://twiki.cern.ch/twiki/bin/view/EGEE/AuthorizationFramework
A toy illustration of the PAP/PDP/PEP flow is sketched below.
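
The PAP/PDP/PEP split can be made concrete with a toy decision flow (purely illustrative Python modelling "can user X perform action Y on resource Z"; this is not the ARGUS API, and real policies are written in SPL and evaluated by the XACML engine):

 #!/usr/bin/env python
 """Toy illustration of the ARGUS PAP/PDP/PEP split. Not the ARGUS API."""
 
 # PAP (Policy Administration Point): authored policies, e.g. a central ban.
 POLICIES = [
     {"subject": "/DC=org/DC=example/CN=banned user", "action": "*",
      "resource": "*", "effect": "deny"},
     {"subject": "*", "action": "submit-job", "resource": "cream-ce",
      "effect": "permit"},
 ]
 
 def matches(pattern, value):
     return pattern in ("*", value)
 
 def pdp(subject, action, resource):
     # PDP (Policy Decision Point): evaluate the request against the
     # policies; first match wins, default deny.
     for p in POLICIES:
         if (matches(p["subject"], subject) and matches(p["action"], action)
                 and matches(p["resource"], resource)):
             return p["effect"]
     return "deny"
 
 def pep(subject, action, resource):
     # PEP (Policy Enforcement Point): formats the request and enforces
     # the decision.
     decision = pdp(subject, action, resource)
     print("%s -> %s on %s: %s" % (subject, action, resource, decision))
     return decision == "permit"
 
 pep("/DC=org/DC=example/CN=some user", "submit-job", "cream-ce")
 pep("/DC=org/DC=example/CN=banned user", "submit-job", "cream-ce")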

Operations Coordination Team (Maria Girone)
  • Mandate – addresses needs identified in the WLCG service coordination recommendations & commissioning of OPS and Tools
  • Establish core teams of experts to validate, commission and troubleshoot services
  • Team goals: understand services needed; monitor health; negotiate configs; commission new services; help with transitions
  • Team roles: core members (sites, regions, experiments, services) + targeted experts
  • Tasks: CVMFS, perfSONAR, gLExec

Jobs with High Memory Profiles
  • See experiment reports.



NGI UK - Homepage CA

Wednesday 22nd August

  • Operationally few changes - VOMS and Nagios changes on hold due to holidays
  • Upcoming meetings Digital Research 2012 and the EGI Technical Forum. UK NGI presence at both.
  • The NGS is rebranding to NES (National e-Infrastructure Service)
  • EGI is looking at options to become a European Research Infrastructure Consortium (ERIC). (Background document.)
  • Next meeting is on Friday 14th September at 13:00.
Events

WLCG workshop - 19th-20th May (NY) Information

CHEP 2012 - 21st-25th May (NY) Agenda

GridPP29 - 26th-27th September (Oxford)

UK ATLAS - Shifter view News & Links

Thursday 21st June

  • Over the last few months ATLAS have been testing their job recovery mechanism at RAL and a few other sites. This is something that was 'implemented' before but never really worked properly. It now appears to be working well, allowing jobs to finish even if the SE is down or unstable when the job finishes.
  • Job recovery works by writing the output of the job to a directory on the WN should it fail when writing the output to the SE. Subsequent pilots will check this directory and try again for a period of 3 hours. If you would like job recovery activated at your site, you need to create a directory to which (ATLAS) jobs can write. I would also suggest that this directory has some form of tmpwatch-style cleanup enabled on it which clears up files and directories older than 48 hours (a sketch is below). Evidence from RAL suggests that it's normally only 1 or 2 jobs that are ever written to the space at a time, and the space used is normally less than a GB; I have not observed more than 10GB being used. Once you have created this space, email atlas-support-cloud-uk at cern.ch with the directory (and your site!) and we can add it to the ATLAS configurations. We can switch off job recovery at any time if it does cause a problem at your site. Job recovery would only be used for production jobs, as users complain if they have to wait a few hours for things to retry (even if it would save them time overall...).
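
A sketch of the kind of tmpwatch-style cleanup suggested above (the directory path is illustrative; a cron job running something like this, or tmpwatch itself, would do):

 #!/usr/bin/env python
 """Sketch: delete files/directories in a job-recovery area not modified in
 the last 48 hours. The path below is illustrative -- use the directory you
 registered with atlas-support-cloud-uk."""
 import os
 import shutil
 import time
 
 RECOVERY_DIR = "/data/atlas/jobrecovery"  # illustrative
 MAX_AGE = 48 * 3600  # 48 hours, as suggested above
 
 now = time.time()
 for name in os.listdir(RECOVERY_DIR):
     path = os.path.join(RECOVERY_DIR, name)
     try:
         if now - os.path.getmtime(path) > MAX_AGE:
             if os.path.isdir(path):
                 shutil.rmtree(path)
             else:
                 os.remove(path)
     except OSError:
         pass  # may have been removed by a running job; skip
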
UK CMS

Tuesday 24th April

  • Brunel will be trialling CVMFS this week; it will be interesting. RALPP are doing OK with it.
UK LHCb

Tuesday 24th April

  • Things are running smoothly. We are going to run a few small-scale tests of new codes. This will also run at T2s, with one UK T2 involved. Then we will soon launch a new reprocessing of all data from this year. The CVMFS update from last week fixes cache corruption on WNs.
UK OTHER

Thursday 21st June - JANET6

  • JANET6 meeting in London (agenda)
  • Spend of order £24M for strategic rather than operational needs.
  • Recommendations to BIS shortly
  • Requirements: bandwidth, flexibility, agility, cost, service delivery - reliability & resilience
  • Core presently 100Gb/s backbone. Looking to 400Gb/s and later 1Tb/s.
  • Reliability limited by funding not ops so need smart provisioning to reduce costs
  • Expecting a 'data deluge' (ITER; EBI; EVLBI; JASMIN)
  • Goal of dynamic provisioning
  • Looking at ubiquitous connectivity via ISPs
  • Contracts were 10 years for connection and 5 years for transmission equipment.
  • Current native capacity 80 channels of 100Gb/s per channel
  • Fibre procurement for next phase underway (standard players) - 6400km fibre
  • Transmission equipment also at tender stage
  • Industry engagement - Glaxo case study.
  • Extra requirements: software coding, security, domain knowledge.
  • Expect genome data usage to explode in 3-5yrs.
  • Licensing is a clear issue
To note

Tuesday 26th June