[clue] btrfs vs ZFS question

Dennis JPerkins dennisjperkins at comcast.net
Sun Mar 31 18:46:45 MDT 2019


You don't seem to need to mount a subvolume, but everyone does it
anyway, because if you have a problem you can unmount it and mount the
latest snapshot in its place.
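
For example (device and subvolume names here are made up), mounting a
subvolume and then swapping in a snapshot looks something like:

    # example names throughout
    mount -o subvol=@home /dev/sda2 /home

    # if it gets damaged, unmount it and mount a recent snapshot instead
    umount /home
    mount -o subvol=@snapshots/home-20190330 /dev/sda2 /home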

According to one website, SUSE has subvolumes for /, /backups, /home,
/opt, /root, /srv, /usr/local, /tmp, and /var.  Some exist so you don't
overwrite data if you do a rollback.  /tmp and /var hold temporary
files, so there's no need to snapshot them.  This is too complicated
for some people, so they only have subvolumes for / and /home.
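
Creating the simpler layout is only a couple of commands (assuming the
top-level volume is mounted at /mnt; the @ names are just a common
convention, not required):

    mount /dev/sda2 /mnt              # example device and names
    btrfs subvolume create /mnt/@
    btrfs subvolume create /mnt/@home
    btrfs subvolume list /mnt         # verify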

I don't know if you need to make a subvolume for /tmp if you are using
tmpfs or ZRAM.  I'd never heard of ZRAM, but NextcloudPi uses it for
/tmp.
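
For reference, a tmpfs /tmp is just an fstab line, and a zram-backed
/tmp can be set up by hand with something like this (sizes are
arbitrary examples):

    tmpfs  /tmp  tmpfs  defaults,noatime,size=2G  0  0

    modprobe zram
    zramctl --find --size 1G    # allocates and prints e.g. /dev/zram0
    mkfs.ext4 /dev/zram0
    mount /dev/zram0 /tmp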

SUSE has a utility called snapper that makes it easy to manage
snapshots of subvolumes, but I haven't looked at it yet.
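Basic usage appears to be something like this (I haven't tried it; the
config name is up to you):

    snapper -c root create-config /             # register / for snapshots
    snapper -c root create -d "before upgrade"  # take a snapshot
    snapper -c root list                        # show snapshots
    snapper -c root undochange 1..2 /etc/fstab  # revert one file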
  
Btrfs apparently can handle swap files now.  You want to truncate the
file every time you boot to get rid of old data, run "chattr +C" on it
while it is still empty to disable copy-on-write, and then use
fallocate to expand it back to the size you want before running mkswap
and swapon on it.  Maybe there's a systemd unit file for this.
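
The whole dance is roughly this (path and size are only examples):

    swapoff /swapfile 2>/dev/null  # ignore the error if it isn't active
    truncate -s 0 /swapfile        # drop old data
    chattr +C /swapfile            # NOCOW only sticks on an empty file
    fallocate -l 4G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile

A small oneshot systemd service running that at boot would cover the
last point, but I haven't written one.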

On Sun, 2019-03-31 at 17:00 -0600, Chris Fedde wrote:
> The ZFS approach is typically to have way more mount points than we
> might have with a classic file system. Each user's home directory, for
> example, might be a mount point.  Any of your data, logging, and other
> write-heavy directories might each have a different mount point.
> Using this scheme then you can apply whatever filesystem attributes
> you want to a "directory" by converting it to a mount point.  There
> are workflows that make this kind of migration pretty easy.  
> Eventually it begins to seem "normal" to work this way.  Of course
> it's not too hard to normalize anything.
> 
> ZFS itself remembers the configuration, so management of all these
> mount points is not as burdensome as it might seem at first.
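
A minimal sketch of that scheme (pool, dataset, and user names made up):

    # example names throughout
    zfs create -o mountpoint=/home tank/home
    zfs create tank/home/alice                # shows up as /home/alice
    zfs set compression=lz4 tank/home/alice   # per-"directory" attributes
    zfs set quota=50G tank/home/alice

    zfs list -o name,mountpoint               # ZFS remembers all of this
    zfs mount -a                              # remount everything at once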
> 
> 
> On Sun, Mar 31, 2019 at 1:26 PM dennisjperkins <
> dennisjperkins at comcast.net> wrote:
> > Btrfs seems more flexible for snapshots, but that can also mean more
> > complicated if you are not careful.  You can only take snapshots of a
> > subvolume.  You might not want a snapshot of / to include everything
> > under it, like /home or /tmp, and if you make these into subvolumes,
> > a snapshot of / will not include them, because Btrfs won't include
> > embedded subvolumes in a snapshot.
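
For example (paths made up), if /home is its own subvolume:

    btrfs subvolume snapshot / /.snapshots/root-20190331
    # /.snapshots/root-20190331/home is just an empty directory;
    # the nested /home subvolume is not part of the snapshot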
> > 
> > 
> > Sent from my Galaxy Tab® S2
> > -------- Original message --------
> > From: Sean LeBlanc <seanleblanc at comcast.net> 
> > Date: 3/31/19  11:39 AM  (GMT-07:00) 
> > To: clue at cluedenver.org 
> > Subject: Re: [clue] btrfs vs ZFS question 
> > 
> > I think he might have meant me, but you saw it first and probably had
> > more info anyway, so it works out. :)
> > 
> > My experience with ZFS has - so far - been somewhat at arm's length.
> > I've been using it via FreeNAS, and about the only thing I've done of
> > any consequence is replace each drive, let it resilver, then move on
> > to the next, until the entire set has been expanded. *knocks on wood*
> > I kind of want this sort of storage to be boring, but reliable.
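
The mechanical part of that grow-by-replacement trick is roughly this
(pool and device names made up):

    zpool set autoexpand=on tank   # example pool/device names
    zpool replace tank da1 da5     # swap one member for a bigger disk
    zpool status tank              # wait for the resilver to finish
    # ...repeat for each remaining member; the extra space then appears
    zpool list tank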
> > 
> > From what I can tell - and I only looked a little bit about 5 years
> > ago or so - btrfs has more promise as far as features, and is not a
> > pain to get to work under Linux (as opposed to things like ZoL), but
> > in the opinion of some at the time, btrfs seemed a bit more, um,
> > sketchy. ZFS had the advantage of a lot of research early on by
> > Sun/Oracle, and then the OpenZFS fork made it available to the world
> > and let it move beyond just Solaris. It's a shame that it seems
> > mostly still confined to FreeBSD. I don't mind FreeBSD, and actually
> > like a few things about it, but I realize that easy Linux interop is
> > going to make adoption much higher.
> > 
> > Seems that btrfs is much more mature now and probably has more
> > features than OpenZFS? Since Dennis' links prompted me to do more
> > reading on it again, the per-file CoW control does seem like an
> > interesting feature, if I understand it correctly.
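
If I read it right, that per-file control is just the chattr/reflink
machinery (paths and file names made up):

    chattr +C /var/lib/libvirt/images           # new files here skip CoW
    lsattr -d /var/lib/libvirt/images
    cp --reflink=always big.img big-clone.img   # explicit CoW copy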
> > 
> > Also, based on comments on the articles or in the articles
> > themselves, I still may take a Pi and use that as a way of shipping
> > deltas from my ZFS pools to a Pi running FreeBSD. Someone had
> > mentioned they were doing incremental backups of a very large dataset
> > (53Tb?) to a Pi in this way. Seems a good way to have some (extra)
> > assurances for your data - at least if you are already using ZFS.
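
The delta shipping itself is just incremental send/receive, something
like this (hostnames and dataset names made up):

    zfs snapshot tank/data@2019-03-31
    zfs send -i tank/data@2019-03-30 tank/data@2019-03-31 | \
        ssh backup-pi zfs receive -F backup/data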
> > 
> > On 3/27/19 9:33 PM, Shawn Perry wrote:
> > 
> > > I'm assuming you mean me, so I'll answer.
> > > 
> > > You can add. You should add in the same pattern that already exists
> > > to maintain performance and redundancy. If you have a 4-disk raid 5,
> > > you should add 4 more disks in a raid 5 config.
> > > 
> > > You cannot remove yet. 0.8x will allow removing, but only to cover
> > > accidental adds.
> > > 
> > > You can resize up. If you replace a disk with a larger one, you can
> > > expand the space. If you add more disks, you can use the extra
> > > space.
> > > 
> > > You cannot shrink or remove.
> > > 
> > > The data does not need balancing unless you add disks. To rebalance,
> > > you would need to re-copy the data. You can use send/recv to do
> > > that. You'd need to stop things to do this. The actual stoppage will
> > > be only the amount of time it takes you to type "zfs rename <source>
> > > <destination>" twice: once to move the old out of the way, once to
> > > move the new back to the original location.
> > > 
> > > Sorta. You can split mirrors in a raid 1 or raid 10 config to drop
> > > down to a single disk or raid 0, respectively. You cannot reshape
> > > like md or btrfs.
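
For the add-in-the-same-pattern case and the rebalance-by-recopying
trick, the commands would be roughly this (pool, dataset, and disk
names made up; raid 5 is raidz1 in ZFS terms):

    # grow a 4-disk raidz1 pool by adding a second 4-disk raidz1 vdev
    zpool add tank raidz1 da4 da5 da6 da7

    # "rebalance" by re-copying a dataset, then swapping names
    zfs snapshot tank/data@move
    zfs send tank/data@move | zfs receive tank/data-new
    zfs rename tank/data tank/data-old
    zfs rename tank/data-new tank/data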
> > > 
> > > From: Dennis J Perkins
> > > Sent: Wednesday, March 27, 2019 9:23 PM
> > > To: CLUE's mailing list
> > > Subject: [clue] btrfs vs ZFS question
> > > 
> > > Sean, does ZFS let you do these things?
> > > 
> > > Btrfs lets you do the following without stopping anything:
> > > 
> > > 1. Add or remove partitions.  If you remove a partition, make sure
> > >    the remaining drives have enough capacity.
> > > 2. Resize a btrfs filesystem.
> > > 3. Balance the data.
> > > 4. Switch between single disk, RAID 0, RAID 1, or RAID 10 configs.
> > > 
> > > Shuffling data around as a result of any of these operations is done
> > > in the background and might take hours.
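
The corresponding btrfs commands are roughly these (device names and
mount point made up):

    btrfs device add /dev/sdd /data         # 1. add a device
    btrfs device remove /dev/sdb /data      #    ...or remove one
    btrfs filesystem resize -10G /data      # 2. shrink (or +10G, or max)
    btrfs balance start /data               # 3. spread data evenly
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /data  # 4. profile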
> 
> _______________________________________________
> clue mailing list: clue at cluedenver.org
> For information, account preferences, or to unsubscribe see:
> http://cluedenver.org/mailman/listinfo/clue

