ZFS-Tests
This page tracks different configurations tested under various setups, RAID systems, and file systems. To add comments about your own test setup, use its running number and add a comment under the table.
for tool testscript.sh (http://ebert.homelinux.org/Scripts/testscript.sh):

- averaged sequential/parallel read: sum of all bytes over all files, divided by the total time taken
- averaged sequential/parallel write: bytes per file, divided by (total time taken divided by total number of files written); a sketch of this calculation follows the list
- parallel read/write defaults: 10 parallel reads and writes for large files, 20 parallel writes for small files
- other defaults: 5.1 TB of mixed files used for reading; 100 x 54 GB large files and 200,000 x 131 kB small files written
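As an illustration, here is a minimal bash sketch of the sequential large-write measurement. This is not the actual testscript.sh: the target path, file count, and file size are assumptions, scaled down from the 100 x 54 GB default.

```bash
#!/bin/bash
# Minimal sketch of the "averaged sequential write" metric defined above.
# Not the actual testscript.sh; TARGET, NFILES and FILE_MB are assumed.

TARGET=/pool/test   # mount point of the filesystem under test (assumed)
NFILES=10           # number of files (scaled down from the 100x54GB default)
FILE_MB=1024        # size per file in MB

start=$(date +%s)
for i in $(seq 1 "$NFILES"); do
    # conv=fsync forces the data to disk before dd exits, so the write-back
    # cache does not inflate the result. Note: on a compressed ZFS dataset,
    # /dev/zero compresses away almost entirely; use incompressible input there.
    dd if=/dev/zero of="$TARGET/file$i" bs=1M count="$FILE_MB" conv=fsync 2>/dev/null
done
elapsed=$(( $(date +%s) - start ))

# averaged sequential write = bytes per file / (total time / number of files),
# which simplifies to (bytes per file * number of files) / total time
echo "averaged sequential write: $(( FILE_MB * NFILES / elapsed )) MB/s"
```

The parallel variants would launch the same dd writes in the background (10 or 20 at a time, followed by `wait`) and apply the same formula.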
| Running number | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| Filesystem | ZFS | XFS | Ext4 | ZFS |
| Special filesystem config | xattr=sa, relatime=on | | | xattr=sa, relatime=on |
| Kind of RAID | 3x raidz2 | RAID60 | RAID60 | 1x raidz3 |
| Number of disks | 33, 11 per raidz2 | 33, 11 per RAID6 | 33, 11 per RAID6 | 35 in single ZFS vdev |
| Kind of disks | 2 TB SAS, 7200 rpm, 6 Gbps | 2 TB SAS, 7200 rpm, 6 Gbps | 2 TB SAS, 7200 rpm, 6 Gbps | 2 TB SAS, 7200 rpm, 6 Gbps |
| Controller | PERC H800 (33x RAID0) | PERC H800 | PERC H800 | PERC H800 (33x RAID0) |
| Controller cache used | yes (512 MB) | yes (512 MB) | yes (512 MB) | yes (512 MB) |
| RAM | 12 GB | 12 GB | 12 GB | 12 GB |
| CPU | 2x E5620, 2.4 GHz, HT on | 2x E5620, 2.4 GHz, HT on | 2x E5620, 2.4 GHz, HT on | 2x E5620, 2.4 GHz, HT on |
| OS | SL6 | SL6 | SL6 | SL6 |
| normal averaged sequential read | 536 MB/s | 856 MB/s | 607 MB/s | 422 MB/s |
| normal averaged parallel read | 746 MB/s | 538 MB/s | 533 MB/s | 543 MB/s |
| normal averaged sequential large writes | 709 MB/s | 667 MB/s | 348 MB/s | 677 MB/s |
| normal averaged parallel large writes | 680 MB/s | 314 MB/s | 140 MB/s | 660 MB/s |
| normal averaged sequential small writes | 10 MB/s | 2 MB/s | 2 MB/s | 10 MB/s |
| normal averaged parallel small writes | 74 MB/s | 15 MB/s | 20 MB/s | 69 MB/s |
| degraded averaged sequential read | 532 MB/s | 274 MB/s | 266 MB/s | 423 MB/s |
| degraded averaged parallel read | 707 MB/s | 280 MB/s | 298 MB/s | 529 MB/s |
| degraded averaged sequential large writes | 706 MB/s | 657 MB/s | 298 MB/s | 644 MB/s |
| degraded averaged parallel large writes | 695 MB/s | 319 MB/s | 141 MB/s | 653 MB/s |
| degraded averaged sequential small writes | 10 MB/s | 2 MB/s | 2 MB/s | 10 MB/s |
| degraded averaged parallel small writes | 74 MB/s | 17 MB/s | 15 MB/s | 68 MB/s |
| rebuild averaged sequential read | 401 MB/s | 247 MB/s | 246 MB/s | 316 MB/s |
| rebuild averaged parallel read | 533 MB/s | 267 MB/s | 274 MB/s | 417 MB/s |
| rebuild averaged sequential large writes | - | 367 MB/s | 239 MB/s | 215 MB/s |
| rebuild averaged parallel large writes | - | 279 MB/s | 129 MB/s | 216 MB/s |
| rebuild averaged sequential small writes | 6 MB/s | 3 MB/s | 2 MB/s | 7 MB/s |
| rebuild averaged parallel small writes | 31 MB/s | 17 MB/s | 19 MB/s | 41 MB/s |
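The degraded and rebuild rows above require the array to be in the corresponding state while the tests run. For the ZFS configurations this can be produced with standard zpool commands; the pool and device names below are hypothetical, not taken from the test setup.

```bash
# Degrade the pool by taking one disk offline ('tank' and device names assumed)
zpool offline tank sdk

# ... run the read/write tests against the now-degraded pool ...

# Start a rebuild (resilver) by replacing the offlined disk with a spare
zpool replace tank sdk sdaj

# Check resilver progress; the "rebuild" rows are measured while this runs
zpool status tank
```

For the hardware-RAID configurations (2 and 3), the equivalent states would be produced by failing and replacing a disk through the PERC H800 controller instead.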
Comments:
- for 1) the files for the read test were on a compressed area of ZFS (ROOT files, tgz files, LSST data files)
- for 1) the ZFS write test during rebuild is not comparable, since the rebuild always finished faster than the test (in 2 h, compared to 21 h for the hardware RAID)
- for 1-4) all read tests are based on the same mix of files
- for 1-4) all tests were done with testscript.sh as explained above the table
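For reference, a pool like configuration 1 (three raidz2 vdevs of 11 disks each, with xattr=sa and relatime=on) could be created along the following lines. This is a sketch with assumed pool and device names, not the commands actually used for the tests.

```bash
# Configuration 1: three raidz2 vdevs of 11 disks each
# (each disk exported by the PERC H800 as a single-disk RAID0)
zpool create tank \
    raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
    raidz2 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv sdw \
    raidz2 sdx sdy sdz sdaa sdab sdac sdad sdae sdaf sdag sdah

# Properties from the "Special filesystem config" row of the table
zfs set xattr=sa tank
zfs set relatime=on tank

# Configuration 4 would instead use one raidz3 vdev spanning all 35 disks:
#   zpool create tank raidz3 <35 disks>
```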