Line 11:
| | style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Review of Issues during the week 20th November to the 27th November 2018. | | | style="background-color: #b7f1ce; border-bottom: 1px solid silver; text-align: center; font-size: 1em; font-weight: bold; margin-top: 0; margin-bottom: 0; padding-top: 0.1em; padding-bottom: 0.1em;" | Review of Issues during the week 20th November to the 27th November 2018. |
|}
− * PPD Tier-2 reverted its switch to IPv6. This resolved a variety of problems for the Tier-2, some of which had been blamed on the Tier-1 (e.g. FTS service failures).
+ * The batch farm was drained and rebooted to apply the security patch for CVE-2018-18955.
− * We believe the CMS AAA issues have been resolved. We will close the tickets but are continuing to look at ways of building more resilience into the service. The change that appears to have fixed the problems was made on the 23rd November: it reduced the chunk size requested from Echo from 64MB to 4MB, so small data requests are now served much faster, at the cost of a slight (~10%) reduction in performance for large data requests (a rough illustration is sketched at the end of this section). This change was made only to the CMS AAA service. Since then the SAM tests have all been passing, and 90% of the CMS AAA HammerCloud test jobs have been passing, which is extremely good. The HammerCloud tests involve jobs at other sites requesting data from RAL; there can be a significant failure rate that has nothing to do with the site, and anything above a 70% success rate is considered a pass. The throughput on the proxy machines is much more balanced with the new chunk size in place.
+ * CMS SAM tests are still appearing as “missing”. We believe that CMS is aware of this problem and is manually correcting it while working on a fix.
− * The problem reported by NA62 in the previous week, when they could not recall data in a timely manner, was the result of a forgotten cron job on the new system. This cron job assigns new media to tape pools as they run short. The ATLAS tape pool ran out of tapes and a backlog of 200,000 files built up before this was noticed. The tape system prioritises writing to tape over recalls (to ensure data is safe), so once the problem was fixed the next ~48 hours were dominated by clearing this backlog.
+ * While completing the weighting up of the ClusterVision 17 storage nodes, the Ceph Manager daemons crashed and were unavailable for 30 minutes. This did not impact any production work: despite their grand name, they simply provide monitoring information about the cluster. We believe this is a bug in the web dashboard; we have disabled it for now and informed the Ceph developers.
− * Since 22nd November, SAM tests against the (old, tape-only) CMS Castor instance appear to be "missing" from the reports (and in some plots appear to indicate 100% failure). If we check the actual results, they are passing. We do not currently understand the issue; the migration to the new endpoint is only a week away.
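The chunk-size trade-off described in the CMS AAA item above can be seen with a simple back-of-the-envelope model. This is only an illustrative sketch: the 64MB and 4MB chunk sizes come from the report, but the gateway bandwidth and per-request overhead figures below are assumed values, not measurements of the Echo gateways.

<syntaxhighlight lang="python">
import math

def read_time(read_size_mb, chunk_mb, bandwidth_mb_s=500.0, per_request_overhead_s=0.001):
    """Rough time to serve a client read, fetching data from the object store
    in fixed-size chunks. All performance numbers are illustrative assumptions."""
    # Even a tiny client read costs at least one full chunk from the store.
    fetched_mb = max(read_size_mb, chunk_mb)
    requests = math.ceil(fetched_mb / chunk_mb)
    return fetched_mb / bandwidth_mb_s + requests * per_request_overhead_s

for chunk in (64, 4):
    small = read_time(1, chunk)      # sparse 1MB analysis-style read
    large = read_time(1024, chunk)   # 1GB streaming-style read
    print(f"{chunk:>2}MB chunks: 1MB read ~{small:.3f}s, 1GB read ~{large:.3f}s")
</syntaxhighlight>

With these assumed numbers the 4MB chunk size serves the small read roughly an order of magnitude faster, while the extra per-request overhead slows the large read by only around 10%, consistent with the behaviour reported above.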
<!-- ***********End Review of Issues during last week*********** ----->
<!-- *********************************************************** ----->