[clue] btrfs vs ZFS question

Sean LeBlanc seanleblanc at comcast.net
Sun Mar 31 11:39:06 MDT 2019


I think he might have meant me, but you saw it first and probably had 
more info anyway, so it works out. :)

My experience with ZFS has - so far - been somewhat at arm's length. 
I've been using it via FreeNAS, and about the only thing I've done of any 
consequence is replace each drive, let it resilver, then move on to the 
next, until the entire set had been expanded. *knocks on wood* I kind 
of want this sort of storage to be boring but reliable.
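
For anyone curious, the rough shape of that drive-swap dance from the 
command line looks something like the following. FreeNAS does most of 
this through its GUI, so treat it as a sketch with made-up pool and 
device names:

  # replace one disk at a time and wait for the resilver to finish
  zpool replace tank da2 da6
  zpool status tank            # watch until the resilver completes

  # once every disk in the vdev is a larger one, let the pool grow
  zpool set autoexpand=on tank
  zpool online -e tank da6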

From what I can tell - and I only looked a little bit, about 5 years ago 
or so - btrfs has more promise as far as features go, and is not a pain 
to get working under Linux (as opposed to things like ZoL), but in the 
opinion of some at the time, btrfs seemed a bit more, um, sketchy. ZFS 
had the advantage of a lot of research early on by Sun/Oracle, and then 
the OpenZFS fork opened it up to the wider world and moved it beyond 
just Solaris. It's a shame that it seems mostly still confined to 
FreeBSD. I don't mind FreeBSD, and actually like a few things about it, 
but I realize that easy Linux interop is going to make adoption much 
higher.

Seems that btrfs is much more mature now and probably has more features 
than OpenZFS? Since Dennis' links prompted me to do more reading on it 
again, the per-file CoW feature does seem like an interesting one for 
sure, if I understand it correctly.
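
If I'm reading it right, that per-file CoW behavior is what reflink 
copies use. On a btrfs mount it would look roughly like this (paths are 
just placeholders):

  # instant "copy" that shares extents with the original; blocks are
  # only duplicated when one of the two files is later modified
  cp --reflink=always bigfile.img bigfile-clone.img

  # subvolume snapshots work the same way at a larger scale
  btrfs subvolume snapshot /mnt/data /mnt/data-snap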

Also, based on comments and articles I've read, I still may take a Pi 
running FreeBSD and use it as a target for shipping deltas from my ZFS 
pools. Someone mentioned they were doing incremental backups of a very 
large dataset (53 TB?) to a Pi this way. Seems like a good way to have 
some (extra) assurance for your data - at least if you are already 
using ZFS.
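
As I understand it, that setup would boil down to something like this 
(pool, dataset, and host names invented, and untested on my end):

  # take periodic snapshots on the FreeNAS box
  zfs snapshot tank/data@2019-03-31

  # first run: send a full copy of the snapshot to the Pi
  zfs send tank/data@2019-03-30 | ssh pi-backup zfs receive -F backup/data

  # after that: ship only the deltas between snapshots
  zfs send -i tank/data@2019-03-30 tank/data@2019-03-31 | \
      ssh pi-backup zfs receive backup/data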

On 3/27/19 9:33 PM, Shawn Perry wrote:
>
> I’m assuming you mean me, so I’ll answer.
>
>  1. You can add. You should add in the same pattern that already
>     exists to maintain performance and redundancy. If you have a 4
>     disk raid 5, you should add 4 more disks in a raid 5 config.
>      1. You cannot remove yet. 0.8x will allow removing, but only to
>         cover accidental adds.
>  2. You can resize up. If you replace a disk with a larger one, you
>     can expand the space. If you add more disks, you can use the extra
>     space.
>      1. You cannot shrink or remove.
>  3. The data does not need balancing unless you add disks. To
>     rebalance, you would need to re-copy the data. You can use
>     send/recv to do that. You’d need to stop things to do this. The
>     actual stoppage will be only the amount of time it takes you to
>     type “zfs rename <source> <destination>” twice. Once to move the
>     old out of the way, once to move the new back to the original
>     location.
>  4. Sorta. You can split mirrors in a raid 1 or raid 10 config to drop
>     down to a single disk or raid 0, respectively. You cannot reshape
>     like md or btrfs.
>
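
For my own notes, the commands behind Shawn's points above look roughly 
like the following - pool, dataset, and device names are made up, and I 
haven't run these myself:

  # 1. grow by adding another vdev in the same pattern as the existing ones
  zpool add tank raidz1 da8 da9 da10 da11

  # 3. rebalance via send/recv into a new dataset, then swap the names
  zfs rename tank/data tank/data-old
  zfs rename tank/data-new tank/data

  # 4. split a mirrored pool off into a second, standalone pool
  zpool split tank tank2
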
> From: Dennis J Perkins <dennisjperkins at comcast.net>
> Sent: Wednesday, March 27, 2019 9:23 PM
> To: CLUE's mailing list <clue at cluedenver.org>
> Subject: [clue] btrfs vs ZFS question
>
> Sean, does ZFS let you do these things?
>
> Btrfs lets you do the following without stopping anything:
>
> 1. Add or remove partitions.  If you remove a partition, make sure the
>    remaining drives have enough capacity.
>
> 2. Resize a btrfs filesystem.
>
> 3. Balance the data.
>
> 4. Switch between single disk, RAID 0, RAID 1, or RAID 10 configs.
>
> Shuffling data around as a result of any of these operations is done in
> the background and might take hours.
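
And for symmetry, the btrfs equivalents of Dennis's list, as far as I 
can tell - again untested here, with placeholder device names and mount 
point:

  # 1. add or remove a device on a mounted filesystem
  btrfs device add /dev/sdd /mnt/data
  btrfs device remove /dev/sdb /mnt/data

  # 2. resize (here: grow to use all available space)
  btrfs filesystem resize max /mnt/data

  # 3. rebalance data across the devices
  btrfs balance start /mnt/data

  # 4. convert profiles, e.g. single/RAID 0 -> RAID 1
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data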
>
> _______________________________________________
> clue mailing list: clue at cluedenver.org
> For information, account preferences, or to unsubscribe see:
> http://cluedenver.org/mailman/listinfo/clue

