Latest revision as of 14:16, 2 April 2014

RAL Tier1 Operations Report for 2nd April 2014

Review of Issues during the fortnight 19th March to 2nd April 2014.
  • There was a short (around 5 minute) break in external connectivity to the Tier1 during the morning of Thursday 20th March and again a similar event the following morning.
  • There was a failover of an Atlas Castor Database early evening on Tuesday 25th March. The failover triggered a call-out and the database was put back onto its allocated node. The cause is a bug that has been reported to Oracle.
  • On Friday, 28th March, we were not running some of the CE SUM tests in a timely manner. It was found that owing to a separate change in the Condor configuration we were no longer prioritising the test jobs. This was fixed.
Resolved Disk Server Issues
  • Newly deployed server GDSS721 (LHCbDst - D1T0) was taken back out of service following problems on 20th March. It was put back in production the following day after the problems were traced to errors in the network routing table and fixed.
Current operational status and issues
  • There have been problems with the CMS Castor instance in recent weeks. These are triggered by high load. Work is underway to alleviate these problems, in particular servers with faster network connections will be moved into the disk cache in front of CMS_Tape when they become available.
  • The Castor Team are now able to reproduce the intermittent failures of Castor access via the SRM that have been reported in recent weeks. Understanding of the problem is significantly advanced and further investigations are ongoing using the Castor Preprod instance. Ideas for a workaround are being developed.
  • As reported before, the file deletion rate was somewhat improved by working with Atlas. However, there is still a problem that needs to be understood.
  • Around 50 files in tape backed service classes (mainly in GEN) have been found not to have migrated to tape. This is under investigation. The cause for some of these is understood (a bad tape at time of migration).
  • Problems with the infrastructure used to host many of our non-Castor services have largely been worked around, although not yet fixed. Some additional migrations of VMs have been necessary.
Ongoing Disk Server Issues
  • GDSS239 (Atlas HotDisk) crashed this morning. This is being investigated.
Notable Changes made this last fortnight.
  • The rollout of worker nodes updated to the EMI-3 version of the WN software continues and is expected to be completed this week.
  • The EMI3 Argus server is being rolled out for use across all CEs and WNs.
  • The old MyProxy server (lcgrbp01.gridpp.rl.ac.uk) was turned off today. Its replacement (myproxy.gridpp.rl.ac.uk) is in production.
  • The 2013 purchases of worker nodes are being added to the farm this week.
  • Two of the CV2013 disk servers (120TB each) have been added to LHCbDst. A further 9 are being added today. Three further servers are in CMS non-prod and will be moved into production imminently.
Declared in the GOC DB
Service Scheduled? Outage/At Risk Start End Duration Reason
srm-lhcb-tape.gridpp.rl.ac.uk UNSCHEDULED WARNING 03/04/2014 08:00 03/04/2014 09:30 1 hour and 30 minutes Warning during further testing of new tape interface (ACSLS).
lcgrbp01.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/04/2014 12:00 01/05/2014 12:00 29 days System to be decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.)
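The Duration column in the table above can be cross-checked against the Start and End timestamps, which GOC DB records in DD/MM/YYYY HH:MM format. A minimal sketch (the helper name is our own, not a GOC DB API):

```python
from datetime import datetime

# GOC DB timestamps use day-first format.
FMT = "%d/%m/%Y %H:%M"

def duration_minutes(start: str, end: str) -> int:
    """Minutes between two GOC DB-style timestamps."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return int(delta.total_seconds() // 60)

# srm-lhcb-tape warning: 1 hour and 30 minutes.
print(duration_minutes("03/04/2014 08:00", "03/04/2014 09:30"))  # 90
# lcgrbp01 decommissioning outage: 29 days.
print(duration_minutes("02/04/2014 12:00", "01/05/2014 12:00") // (24 * 60))  # 29
```

Both declared durations agree with their timestamps.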
Advanced warning for other interventions
The following items are being discussed and are still to be formally scheduled and announced.
  • The installation of the new Tier1 Routing layer and the change in the way the Tier1 connects to the RAL network is expected to take place in one of the two weeks following Easter.

Listing by category:

  • Databases:
    • Switch LFC/FTS/3D to new Database Infrastructure.
  • Castor:
    • Castor 2.1.14 testing is largely complete. (A non-Tier1 production Castor instance was successfully upgraded yesterday, 1st April.) We are starting to look at possible dates for rolling this out (probably around May).
  • Networking:
    • Update core Tier1 network and change connection to site and OPN including:
      • Install new Routing layer for Tier1 & change the way the Tier1 connects to the RAL network.
      • These changes will lead to the removal of the UKLight Router.
  • Fabric
    • We are phasing out the use of the software server used by the small VOs.
    • Firmware updates on remaining EMC disk arrays (Castor, FTS/LFC)
    • There will be circuit testing of the remaining (i.e. non-UPS) circuits in the machine room during 2014.
Entries in GOC DB starting between the 19th March and 2nd April 2014.
Service Scheduled? Outage/At Risk Start End Duration Reason
lcgrbp01.gridpp.rl.ac.uk SCHEDULED OUTAGE 02/04/2014 12:00 01/05/2014 12:00 29 days System to be decommissioned. (Replaced by myproxy.gridpp.rl.ac.uk.)
All Castor endpoints (All SRMs) SCHEDULED WARNING 01/04/2014 09:00 01/04/2014 11:00 2 hours Testing of new interface to the tape library. During this time Castor disk services will remain up but there will be no tape access. Tape recalls will stall. Writes to tape backed service classes will carry on, with files flushed from the disk caches to tape once the testing is completed.
srm-lhcb-tape.gridpp.rl.ac.uk SCHEDULED OUTAGE 01/04/2014 09:00 01/04/2014 11:00 2 hours Testing of new interface to the tape library. During this time Castor disk services will remain up but there will be no tape access. Tape recalls will stall. Writes to tape backed service classes will carry on, with files flushed from the disk caches to tape once the testing is completed.
srm-cms.gridpp.rl.ac.uk UNSCHEDULED OUTAGE 14/03/2014 09:40 14/03/2014 10:26 46 minutes Problem with CMS Castor instance being investigated.
srm-cms.gridpp.rl.ac.uk UNSCHEDULED OUTAGE 14/03/2014 04:15 14/03/2014 07:15 3 hours Currently investigating problems with Oracle DB behind Castor CMS.
Open GGUS Tickets (Snapshot during morning of meeting)
GGUS ID Level Urgency State Creation Last Update VO Subject
102902 Green Urgent In Progress 2014-04-01 2014-04-02 MICE & NA62 Stale .cvmfswhitelist file MICE VO
102611 Green Urgent In Progress 2014-03-24 2014-03-24 NAGIOS *eu.egi.sec.Argus-EMI-1* failed on argusngi.gridpp.rl.ac.uk@RAL-LCG2
101968 Yellow Less Urgent On Hold 2014-03-11 2014-04-01 Atlas RAL-LCG2_SCRATCHDISK: One dataset to delete is causing 1379 deletion errors
101079 Red Less Urgent In Progress 2014-02-09 2014-04-01 ARC CEs have VOViews with a default SE of "0"
99556 Red Very Urgent On Hold 2013-12-06 2014-03-21 NGI Argus requests for NGI_UK
98249 Red Urgent In Progress 2013-10-21 2014-03-13 SNO+ please configure cvmfs stratum-0 for SNO+ at RAL T1
97025 Red Less urgent On Hold 2013-09-03 2014-03-04  Myproxy server certificate does not contain hostname
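The Green/Yellow/Red "Level" column in the ticket table broadly tracks how long a ticket has been open. As an illustration (assuming ages are measured against the meeting date, 2 April 2014), the ages can be computed from the Creation column:

```python
from datetime import date

MEETING = date(2014, 4, 2)  # date of this operations meeting

# Creation dates taken from the GGUS table above.
tickets = {
    "102902": date(2014, 4, 1),
    "102611": date(2014, 3, 24),
    "101968": date(2014, 3, 11),
    "101079": date(2014, 2, 9),
    "99556": date(2013, 12, 6),
    "98249": date(2013, 10, 21),
}

ages = {gid: (MEETING - opened).days for gid, opened in tickets.items()}
for gid, age in sorted(ages.items(), key=lambda kv: kv[1]):
    print(f"{gid}: open {age} days")
```

Sorting by age reproduces the table's colour gradient: the two Green tickets are days old, while the Red tickets have been open for two months or more.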
Availability Report

Key: Atlas HC = Atlas HammerCloud (Queue ANALY_RAL_SL6, Template 508); CMS HC = CMS HammerCloud

Day OPS Alice Atlas CMS LHCb Atlas HC CMS HC Comment
19/03/14 100 100 100 88.6 100 99 73 Multiple SRM test failures (load problems).
20/03/14 100 100 99.7 99.6 100 100 n/a Atlas: One SRM Test failure; CMS: CE Test failures on all 3 ARC CEs (no compatible resources).
21/03/14 100 100 100 100 100 100 n/a
22/03/14 100 100 100 100 100 100 n/a
23/03/14 100 100 100 100 100 100 n/a
24/03/14 100 100 100 100 100 100 n/a
25/03/14 100 100 99.0 89.8 100 98 99 Atlas: Castor database problem (Atlas_srm DB moved to another RAC node following a DB crash); CMS SRM SUM test failures separated through day.
26/03/14 100 100 100 87.1 100 100 99 Four separate SRM test failures.
27/03/14 100 100 100 96.5 100 97 100 Two test failures of SRM Put test.
28/03/14 100 100 100 100 100 100 100
29/03/14 100 100 100 100 100 99 100
30/03/14 100 100 100 100 100 100 99
31/03/14 100 100 100 100 100 100 99
01/04/14 100 100 100 100 100 100 99
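A short summary of the daily table above: mean availability per experiment over the 14 days from 19 March to 1 April, computed directly from the tabulated percentages (OPS, Alice and LHCb were 100 throughout).

```python
# Daily availability percentages, 19/03/14 - 01/04/14, from the table above.
atlas = [100, 99.7, 100, 100, 100, 100, 99.0, 100, 100, 100, 100, 100, 100, 100]
cms = [88.6, 99.6, 100, 100, 100, 100, 89.8, 87.1, 96.5, 100, 100, 100, 100, 100]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Atlas: {mean(atlas):.1f}%  CMS: {mean(cms):.1f}%")
```

The CMS figure reflects the SRM test failures under high CMS_Tape load noted in the operational status section.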