I've been having some issues with my CIFS write performance. I just built a FreeNAS server and created a 6-drive RAIDZ2 pool. I'm getting around 100MB/s read performance on average, but my writes top out around 50-55MB/s and are often less. I've got autotune enabled, but I'm not sure how (or if) I need to fine-tune my settings any further to improve performance.

I've run the following test on my volume:

dd if=/dev/zero of=testfile bs=1024 count=100000
102400000 bytes transferred in 0.512641 secs (199749848 bytes/sec)

dd if=testfile of=/dev/zero bs=1024 count=100000
102400000 bytes transferred in 0.202251 secs (506301682 bytes/sec)

Are there any other tests I should be doing to determine whether it's my volume or something with my shares? I'm planning to set up link aggregation shortly and want to be sure that I'm doing everything I can to saturate that 2Gb connection.

Before you bail on this kit, set up your flash drive as your ZIL log device to see how that impacts your performance.

Just out of curiosity, what if I were to bail from ZFS and go with a RAID10 array with this hardware?

The thing you need to keep in mind with software RAID is that it is not as SMB-friendly as hardware RAID. For example, when a drive fails in a hardware RAID setup, the technician just swaps the drive and the task is done. With software RAID you must go into the management software and "eject" the drive from the array, and only then can you remove the physical drive. After the drive is replaced, you must go back into the software configuration and "insert" the new drive into the array, at which point the array will automatically begin to "repair" itself. Software arrays are very powerful and give you more control over how your array performs, but the flip side of that power is complexity.

I built a similar system to what you have and was underwhelmed with the performance. I was running ZFS on top of a hardware RAID 10 for the other features ZFS offers, but I didn't have a flash drive handy to see if that would really help with performance. In the end I ran out of time, cut my losses, switched to CentOS, and used DRBD to replicate the filesystem between the two servers.
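One note on the dd test quoted in the thread: bs=1024 issues 1 KiB writes, which mostly measures syscall overhead rather than streaming throughput. A minimal sketch of the same test with 1 MiB blocks — TARGET_DIR defaulting to /tmp is a placeholder; point it at a dataset on the pool:

```shell
# Directory to test -- substitute a dataset on your pool (placeholder default).
TARGET_DIR=${TARGET_DIR:-/tmp}

# Write test: 100 MiB in 1 MiB blocks; large blocks better approximate
# streaming CIFS traffic than bs=1024 does.
dd if=/dev/zero of="$TARGET_DIR/testfile" bs=1M count=100

# Read test: read the file back, discarding the data.
dd if="$TARGET_DIR/testfile" of=/dev/null bs=1M

# Clean up the test file.
rm "$TARGET_DIR/testfile"
```

Two caveats: FreeBSD's dd spells the size suffix in lowercase (bs=1m), and if compression is enabled on the dataset, zero-filled files compress away and inflate the numbers, so treat these results as a rough guide only.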
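The "flash drive as your ZIL log" suggestion means adding a separate log (SLOG) device to the pool. A minimal sketch, assuming a pool named tank and the flash device appearing as /dev/da6 — both names are placeholders for your own:

```shell
# Attach the flash device to the pool as a dedicated ZIL log (SLOG) device.
zpool add tank log /dev/da6

# Confirm it appears under a "logs" section in the pool layout.
zpool status tank
```

Worth knowing before testing: a SLOG only accelerates synchronous writes, and CIFS traffic is largely asynchronous, so it may or may not move the numbers here; log devices can also be removed from the pool later if they don't help.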
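For reference, the eject / insert / repair sequence described for software RAID looks like this with Linux mdadm (plausible given the later move to CentOS); /dev/md0 and /dev/sdb1 are placeholder names:

```shell
# "Eject": mark the failing member as failed, then remove it from the array.
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# Physically swap the drive, then "insert" the replacement into the array.
mdadm --manage /dev/md0 --add /dev/sdb1

# The array begins to "repair" (resync) automatically; watch progress here.
cat /proc/mdstat
```

This is exactly the extra operational step the reply is warning about: hardware RAID hides all of this behind a hot-swap bay, at the cost of the control the software stack gives you.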