From dennisjperkins at comcast.net Sat Mar 9 14:39:01 2019 From: dennisjperkins at comcast.net (Dennis J Perkins) Date: Sat, 09 Mar 2019 14:39:01 -0700 Subject: [clue] btrfs In-Reply-To: References: Message-ID:

Sean, I've learned more about btrfs.

Btrfs is meant to be a modern file system that can handle large amounts of data, into the exabyte range, expandable using its own volume management, and with an emphasis on reliability. Reliability comes from checksumming every data and metadata block; if an error is detected, hopefully the mirrored block is good. Mirroring is done by btrfs's own RAID. It doesn't use mdadm for RAID because it needs to be able to read the mirrored copy when a data block is bad, and mdadm RAID doesn't give it that access.

Btrfs's logical volume management is different from LVM because you can't set volume size. You simply add a drive and btrfs incorporates it into the file system. This feels like the JBOD pooling that Greyhole offers.

By default, btrfs creates two copies of the metadata and one copy of the data if there is only one drive, but you can disable the metadata copy. If you only have a single SSD, btrfs keeps one copy of the data and metadata. If you have two drives, the default is to mirror the metadata and stripe the data. You can change this independently for metadata and data by using the -m and -d options and specifying raid0 (stripe) or raid1 (mirror).

If you have more than two drives, you need to understand that mirroring puts the data or metadata on two drives only. Striping goes across several drives, but I don't know what the maximum number is. If mirroring is selected, btrfs does its best to spread the data onto all of the drives, but each data block is only on two drives. See the diagram in the previous email to see how three drives can be used. The drives don't need to be the same size, but some of the drive or drives might not be used.
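As a concrete illustration of the -m and -d options, a minimal sketch (device names and mount point are hypothetical examples, not a tested recipe):

```shell
# Two drives: mirror metadata, stripe data (the defaults described above,
# spelled out explicitly).
mkfs.btrfs -m raid1 -d raid0 /dev/sdb /dev/sdc

# Single drive with the duplicate metadata copy disabled
# (one copy of everything).
mkfs.btrfs -m single -d single /dev/sdb

# Show which profiles a mounted filesystem is actually using.
btrfs filesystem df /mnt/pool
```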
If you just want as much data space as possible, and you don't care about striping or mirroring, you can set -m and -d to single, and the drives will look like one large drive.

You can scrub the filesystem manually or periodically to fix errors. A scrub checks every data and metadata block for checksum errors and replaces a bad block with the mirrored block unless it is also bad. I assume it also checks each mirrored block for errors, but I didn't find confirmation.

There are some optimizations when using SSDs, like avoiding unnecessary seek optimizations, and writing in clusters, even if the writes are for separate files. This results in greater throughput but more seeks later on.

You can format an entire drive without setting up a partition first. Some people advise against this, but I haven't seen any reason given other than that btrfs might not be properly aligned. I don't know if this is true because it probably knows how to calculate alignment. If you are going to use a boot or a swap partition, you will need to partition the drive. At the moment, swap files are not recommended, but this might change.

It's also not recommended to put a database or virtual machines on btrfs unless the file or its directory is set up to not do copy on write.

On Sun, 2019-02-24 at 21:59 -0700, Dennis J Perkins wrote: > Sean, here is what btrfs offers, assuming I haven't misunderstood > something. You can compare this against ZFS, since you are using it. > > Btrfs tries to ensure data integrity. I know that some bugs suggest > the opposite. Checksums for files or data blocks (I'm not sure which) > are stored with the metadata. If an error is detected under RAID 1, the > other copy of the data is checked, and if it is good, it can be copied > to the first drive to replace the bad file or block. I don't know if > it overwrites the original or writes to a different location. I don't > know how it handles an error with RAID 10. > > File compression is enabled by default.
It can be turned off, or you > can select compression algorithms. I think it can be configured to not > compress a file if it is already compressed. > > It uses copy on write (CoW) instead of journalling. This allows > snapshots if you have set up subvolumes. It is supposed to also > improve the life of an SSD because there is no journal to write to. > > If you set up subvolumes, you can create snapshots very quickly > because you are not copying files. The link structure is copied > instead. If you then modify a file, CoW is used on the file in the > subvolume. The block can't be deleted because the snapshot is pointing > to it. You can use the snapshot to back up the subvolume or to restore > it. > > CoW causes fragmentation, but autodefragmentation is available. I > don't know if trim would be a better choice when using SSDs. > > Btrfs has its own volume manager and RAID. They don't work quite the > same as LVM or mdadm. Volume management lets you create a pool. Add a > drive and the pool gets larger. You can't specify the size of a volume, > but I don't know why you would need to. > > Subvolumes are kind of like partitions, but they can grow or shrink. > > RAID 0, 1, and 10 are supported. RAID 5 and 6 should not be used > because they are still working on a solution to the write hole > problem. > > I don't know about regular RAID, but RAID 1 in btrfs has two copies of > each file. If the drives in the pool are different sizes, btrfs > handles making sure that all data is on two drives, but not necessarily > the same drives. For example, if you have three 2 TB drives, you have > 3 TB of useful space with data spread on all three drives.
> +-------+      +----------+      +---------+
> |  1TB  |----->|   1TB    |  +-->|   1TB   |
> +-------+      +----------+  |   +---------+
> |  1TB  |--+   |   1TB    |--|-->|   1TB   |
> +-------+  |   +----------+  |   +---------+
>            +-----------------+
>
> If the drives are different sizes, sometimes not all of a drive in the
> pool will be used.

From dennisjperkins at comcast.net Sat Mar 9 14:49:54 2019 From: dennisjperkins at comcast.net (Dennis J Perkins) Date: Sat, 09 Mar 2019 14:49:54 -0700 Subject: [clue] btrfs on a single drive In-Reply-To: References: Message-ID: <46d796ea89e137e8eaeb6133fff7a075e7af11cf.camel@comcast.net>

I wasn't sure what advantage there would be to putting btrfs on a single drive. The answer is snapshots. And if you are using an SSD, the SSD optimizations. Maybe data error warnings. I don't know if there is a program that scans a log file and notifies you about a problem.

From dennisjperkins at comcast.net Sun Mar 24 21:35:54 2019 From: dennisjperkins at comcast.net (Dennis J Perkins) Date: Sun, 24 Mar 2019 21:35:54 -0600 Subject: [clue] generations of file systems Message-ID: <451bcff7486135a490a7364bd24a6aae08d2dddf.camel@comcast.net>

Jim Salter has an interesting article about this on Ars Technica.

Generation 0: No system. Just an arbitrary stream of data on punchcards, cassettes, tape, etc.

Generation 1: Early filesystems without directories or metadata. CP/M, Apple DOS, etc.

Generation 2: Directories added to better handle the amount of storage space on hard drives. MS-DOS 2.0.

Generation 3: Metadata. Unix, Macintosh.

Generation 4: Journaling. Ext3, NTFS, etc.

Generation 5: CoW, built-in volume management, per-block checksums, self-healing arrays. ZFS, Btrfs, ReFS. Maybe Bcachefs and Stratis?
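Going back to the note above that snapshots are the main win for btrfs on a single drive, a minimal subvolume/snapshot sketch (paths are hypothetical):

```shell
# Snapshots only work on subvolumes, so keep the data in one.
btrfs subvolume create /mnt/pool/home

# Take a read-only snapshot; it shares all blocks with the original via CoW.
btrfs subvolume snapshot -r /mnt/pool/home /mnt/pool/home-snap

# List subvolumes (snapshots show up here too).
btrfs subvolume list /mnt/pool

# Drop the snapshot when it is no longer needed.
btrfs subvolume delete /mnt/pool/home-snap
```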
From DLWillson at TheGeek.NU Mon Mar 25 06:16:19 2019 From: DLWillson at TheGeek.NU (DLWillson) Date: Mon, 25 Mar 2019 06:16:19 -0600 Subject: [clue] generations of file systems Message-ID: Where does XFS fit? David L. Willson 720-333-LANS -------- Original message -------- From: Dennis J Perkins Date: 3/24/19 9:35 PM (GMT-07:00) To: CLUE's mailing list Subject: [clue] generations of file systems Jim Salter has an interesting article about this on Ars Technica. Generation 0: No system. Just an arbitrary stream of data on punchcards, cassettes, tape, etc. Generation 1: Early filesystems without directories or metadata. CP/M, Apple DOS, etc. Generation 2: Directories added to better handle the amount of storage space on hard drives. MS-DOS 2.0. Generation 3: Metadata. Unix, Macintosh. Generation 4: Journaling. Ext3, NTFS, etc. Generation 5: CoW, built-in volume management, per-block checksums, self-healing arrays. ZFS, Btrfs, ReFS. Maybe Bcachefs and Stratis? _______________________________________________ clue mailing list: clue at cluedenver.org For information, account preferences, or to unsubscribe see: http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190325/b1ac8bd1/attachment.html From djperkins77 at gmail.com Mon Mar 25 06:20:56 2019 From: djperkins77 at gmail.com (djperkins77) Date: Mon, 25 Mar 2019 06:20:56 -0600 Subject: [clue] generations of file systems In-Reply-To: Message-ID: <5c98c7ab.1c69fb81.f365d.a881@mx.google.com> I don't know much about it. If it doesn't have a journal, I guess 3rd generation. Sent from my Galaxy Tab S2 -------- Original message -------- From: DLWillson Date: 3/25/19 6:16 AM (GMT-07:00) To: CLUE's mailing list Subject: Re: [clue] generations of file systems Where does XFS fit? David L.
Willson 720-333-LANS -------- Original message -------- From: Dennis J Perkins Date: 3/24/19 9:35 PM (GMT-07:00) To: CLUE's mailing list Subject: [clue] generations of file systems Jim Salter has an interesting article about this on Ars Technica. Generation 0: No system. Just an arbitrary stream of data on punchcards, cassettes, tape, etc. Generation 1: Early filesystems without directories or metadata. CP/M, Apple DOS, etc. Generation 2: Directories added to better handle the amount of storage space on hard drives. MS-DOS 2.0. Generation 3: Metadata. Unix, Macintosh. Generation 4: Journaling. Ext3, NTFS, etc. Generation 5: CoW, built-in volume management, per-block checksums, self-healing arrays. ZFS, Btrfs, ReFS. Maybe Bcachefs and Stratis? _______________________________________________ clue mailing list: clue at cluedenver.org For information, account preferences, or to unsubscribe see: http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190325/e8abb2e2/attachment.html From chris at fedde.us Mon Mar 25 09:51:59 2019 From: chris at fedde.us (Chris Fedde) Date: Mon, 25 Mar 2019 09:51:59 -0600 Subject: [clue] generations of file systems In-Reply-To: <451bcff7486135a490a7364bd24a6aae08d2dddf.camel@comcast.net> References: <451bcff7486135a490a7364bd24a6aae08d2dddf.camel@comcast.net> Message-ID: Link to article? On Sun, Mar 24, 2019 at 9:36 PM Dennis J Perkins wrote: > Jim Salter has an interesting article about this on Ars Technica. > > Generation 0: No system. Just an arbitrary stream of data on punchcards, > cassettes, tape, etc. > > Generation 1: Early filesystems without directories or metadata. CP/M, > Apple DOS, etc. > > Generation 2: Directories added to better handle the amount of storage > space on hard drives. MS-DOS 2.0. > > Generation 3: Metadata. Unix, Macintosh. > > Generation 4: Journaling. Ext3, NTFS, etc.
> Generation 5: CoW, built-in volume management, per-block checksums, > self-healing arrays. ZFS, Btrfs, ReFS. Maybe Bcachefs and Stratis? > > _______________________________________________ > clue mailing list: clue at cluedenver.org > For information, account preferences, or to unsubscribe see: > http://cluedenver.org/mailman/listinfo/clue > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190325/47a004fb/attachment.html From chris at fedde.us Mon Mar 25 10:06:24 2019 From: chris at fedde.us (Chris Fedde) Date: Mon, 25 Mar 2019 10:06:24 -0600 Subject: [clue] generations of file systems In-Reply-To: References: Message-ID: JFS and XFS are in "Gen 4" by this nomenclature. UFS2 and FFS/LFS start approaching Gen 5 but don't fully integrate volume management the way BTRFS and ZFS do. It does get kind of fuzzy if you include LVM as part of the "disk management system" as contemporary systems do. While it's not strictly part of the 'file system', it is very much part of the overall stack for dealing with storage. Advanced disk controllers can blur the lines even further. One of the hardest things I've seen junior and even some senior admins (aka devops engineers) work through is all the layering that goes into a modern server technology stack, and that's before we get into distributed disk access and network file systems, virtualization, containerization, and cloud. I know that there are zeros and ones somewhere down near the bottom, but I'm not sure I can keep all of it straight any more. chris On Mon, Mar 25, 2019 at 6:16 AM DLWillson wrote: > Where does XFS fit? > > David L. Willson > 720-333-LANS > > -------- Original message -------- > From: Dennis J Perkins > Date: 3/24/19 9:35 PM (GMT-07:00) > To: CLUE's mailing list > Subject: [clue] generations of file systems > > Jim Salter has an interesting article about this on Ars Technica. > > Generation 0: No system.
Just an arbitrary stream of data on punchcards, > cassettes, tape, etc. > > Generation 1: Early filesystems without directories or metadata. CP/M, > Apple DOS, etc. > > Generation 2: Directories added to better handle the amount of storage > space on hard drives. MS-DOS 2.0. > > Generation 3: Metadata. Unix, Macintosh. > > Generation 4: Journaling. Ext3, NTFS, etc. > > Generation 5: CoW, built-in volume management, per-block checksums, > self-healing arrays. ZFS, Btrfs, ReFS. Maybe Bcachefs and Stratis? > > _______________________________________________ > clue mailing list: clue at cluedenver.org > For information, account preferences, or to unsubscribe see: > http://cluedenver.org/mailman/listinfo/clue > _______________________________________________ > clue mailing list: clue at cluedenver.org > For information, account preferences, or to unsubscribe see: > http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190325/ae7eee67/attachment-0001.html From dennisjperkins at comcast.net Mon Mar 25 21:21:08 2019 From: dennisjperkins at comcast.net (Dennis J Perkins) Date: Mon, 25 Mar 2019 21:21:08 -0600 Subject: [clue] generations of file systems In-Reply-To: References: <451bcff7486135a490a7364bd24a6aae08d2dddf.camel@comcast.net> Message-ID: <0da54395cf71f2b35a7bb6f2c82dcfb75077af96.camel@comcast.net> https://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/ On Mon, 2019-03-25 at 09:51 -0600, Chris Fedde wrote: > Link to article? > > On Sun, Mar 24, 2019 at 9:36 PM Dennis J Perkins < > dennisjperkins at comcast.net> wrote: > > Jim Salter has an interesting article about this on Ars Technica. > > > > Generation 0: No system. Just an arbitrary stream of data on > > punchcards, cassettes, tape, etc. > > > > Generation 1: Early filesystems without directories or metadata.
> > CP/M, Apple DOS, etc. > > > > Generation 2: Directories added to better handle the amount of > > storage space on hard drives. MS-DOS 2.0. > > > > Generation 3: Metadata. Unix, Macintosh. > > > > Generation 4: Journaling. Ext3, NTFS, etc. > > > > Generation 5: CoW, built-in volume management, per-block > > checksums, self-healing arrays. ZFS, Btrfs, ReFS. Maybe Bcachefs and > > Stratis? > > > > _______________________________________________ > > clue mailing list: clue at cluedenver.org > > For information, account preferences, or to unsubscribe see: > > http://cluedenver.org/mailman/listinfo/clue > > _______________________________________________ > clue mailing list: clue at cluedenver.org > For information, account preferences, or to unsubscribe see: > http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190325/7739693c/attachment.html From dennisjperkins at comcast.net Wed Mar 27 21:23:36 2019 From: dennisjperkins at comcast.net (Dennis J Perkins) Date: Wed, 27 Mar 2019 21:23:36 -0600 Subject: [clue] btrfs vs ZFS question Message-ID: <8968e61f0fb80ea03c34e4b1b07fb999ef6347ee.camel@comcast.net>

Sean, does ZFS let you do these things? Btrfs lets you do the following without stopping anything:

1. Add or remove partitions. If you remove a partition, make sure the remaining drives have enough capacity.
2. Resize a btrfs system.
3. Balance the data.
4. Switch between single disk, RAID 0, RAID 1, or RAID 10 configs.

Shuffling data around as a result of any of these operations is done in the background and might take hours.
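For reference, those four operations map onto commands roughly like this (a sketch; device names and mount point are hypothetical):

```shell
# 1. Add or remove a device while the filesystem stays mounted.
btrfs device add /dev/sdd /mnt/pool
btrfs device remove /dev/sdb /mnt/pool

# 2. Resize the filesystem (shrink by 10 GiB here).
btrfs filesystem resize -10g /mnt/pool

# 3. Rebalance data across the current set of devices.
btrfs balance start /mnt/pool

# 4. Convert data and metadata to a different RAID profile on the fly.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```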
From shawn at redmop.com Wed Mar 27 21:33:05 2019 From: shawn at redmop.com (Shawn Perry) Date: Wed, 27 Mar 2019 21:33:05 -0600 Subject: [clue] btrfs vs ZFS question In-Reply-To: <8968e61f0fb80ea03c34e4b1b07fb999ef6347ee.camel@comcast.net> References: <8968e61f0fb80ea03c34e4b1b07fb999ef6347ee.camel@comcast.net> Message-ID: <5c9c4071.1c69fb81.545f0.517f@mx.google.com>

I'm assuming you mean me, so I'll answer.

1. You can add. You should add in the same pattern that already exists to maintain performance and redundancy. If you have a 4 disk raid 5, you should add 4 more disks in a raid 5 config.
   a. You cannot remove yet. 0.8x will allow removing, but only to cover accidental adds.
2. You can resize up. If you replace a disk with a larger one, you can expand the space. If you add more disks, you can use the extra space.
   a. You cannot shrink or remove.
3. The data does not need balancing unless you add disks. To rebalance, you would need to re-copy the data. You can use send/recv to do that. You'd need to stop things to do this. The actual stoppage will be only the amount of time it takes you to type "zfs rename ..." twice. Once to move the old out of the way, once to move the new back to the original location.
4. Sorta. You can split mirrors in a raid 1 or raid 10 config to drop down to a single disk or raid 0, respectively. You cannot reshape like md or btrfs.

From: Dennis J Perkins Sent: Wednesday, March 27, 2019 9:23 PM To: CLUE's mailing list Subject: [clue] btrfs vs ZFS question

Sean, does ZFS let you do these things? Btrfs lets you do the following without stopping anything: 1. Add or remove partitions. If you remove a partition, make sure the remaining drives have enough capacity. 2. Resize a btrfs system. 3. Balance the data. 4. Switch between single disk, RAID 0, RAID 1, or RAID 10 configs. Shuffling data around as a result of any of these operations is done in the background and might take hours.
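Shawn's numbered points correspond to ZFS commands roughly like these (a hedged sketch; pool, dataset, and device names are hypothetical):

```shell
# 1. Grow the pool by adding a vdev in the same pattern (here a mirror pair).
zpool add tank mirror /dev/sde /dev/sdf

# 2. After replacing a disk with a larger one, expand into the new space.
zpool online -e tank /dev/sdb

# 3. "Rebalance" by re-copying: send to a new dataset, then rename twice.
zfs snapshot tank/data@move
zfs send tank/data@move | zfs recv tank/newdata
zfs rename tank/data tank/olddata
zfs rename tank/newdata tank/data

# 4. Split one side of each mirror off into a new single-copy pool.
zpool split tank tank2
```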
_______________________________________________ clue mailing list: clue at cluedenver.org For information, account preferences, or to unsubscribe see: http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190327/97e3db80/attachment.html From seanleblanc at comcast.net Sun Mar 31 11:39:06 2019 From: seanleblanc at comcast.net (Sean LeBlanc) Date: Sun, 31 Mar 2019 11:39:06 -0600 Subject: [clue] btrfs vs ZFS question In-Reply-To: <5c9c4071.1c69fb81.545f0.517f@mx.google.com> References: <8968e61f0fb80ea03c34e4b1b07fb999ef6347ee.camel@comcast.net> <5c9c4071.1c69fb81.545f0.517f@mx.google.com> Message-ID:

I think he might have meant me, but you saw it first and probably had more info anyway, so it works out. :)

My experience with ZFS has so far been somewhat at arm's length. I've been using it via FreeNAS, and about the only thing I've done of any consequence is replace each drive, let it resilver, then move on to the next, until the entire set has been expanded. *knocks on wood* I kind of want this sort of storage to be boring, but reliable.

From what I can tell (and I only looked a little bit, about 5 years ago or so) btrfs has more promise as far as features, and is not a pain to get to work under Linux (as opposed to things like ZoL), but in the opinion of some at the time, btrfs seemed a bit more, um, sketchy. ZFS had the advantage of a lot of research early on by Sun/Oracle, and then the OpenZFS fork made it available to the world and moved beyond just Solaris. It's a shame that it seems mostly still confined to FreeBSD. I don't mind FreeBSD, and actually like a few things about it, but I realize that easy Linux interop is going to make adoption much higher.

Seems that btrfs is much more mature now and probably has more features than OpenZFS?
Since Dennis' links prompted me to do more reading on it again, it does seem the CoW feature per file is an interesting one for sure, if I understand it correctly. Also based on comments or in articles themselves, I still may take a Pi and use that as a way of shipping deltas from my ZFS pools to a Pi running FreeBSD. Someone had mentioned they were doing incremental backups of a very large dataset (53Tb?) to a Pi in this way. Seems a good way to have some (extra) assurances of your data, at least if you are already using ZFS. On 3/27/19 9:33 PM, Shawn Perry wrote: > > I'm assuming you mean me, so I'll answer. > > 1. You can add. You should add in the same pattern that already > exists to maintain performance and redundancy. If you have a 4 > disk raid 5, you should add 4 more disks in a raid 5 config. > 1. You cannot remove yet. 0.8x will allow removing, but only to > cover accidental adds. > 2. You can resize up. If you replace a disk with a larger one, you > can expand the space. If you add more disks, you can use the extra > space. > 1. You cannot shrink or remove. > 3. The data does not need balancing unless you add disks. To > rebalance, you would need to re-copy the data. You can use > send/recv to do that. You'd need to stop things to do this. The > actual stoppage will be only the amount of time it takes you to > type "zfs rename ..." twice. Once to move the > old out of the way, once to move the new back to the original > location. > 4. Sorta. You can split mirrors in a raid 1 or raid 10 config to drop > down to a single disk or raid 0, respectively. You cannot reshape > like md or btrfs. > > *From: *Dennis J Perkins > *Sent: *Wednesday, March 27, 2019 9:23 PM > *To: *CLUE's mailing list > *Subject: *[clue] btrfs vs ZFS question > > Sean, does ZFS let you do these things? > > Btrfs lets you do the following without stopping anything: > > 1. Add or remove partitions. If you remove a partition, make sure the > > remaining drives have enough capacity. > > 2.
Resize a btrfs system. > > 3. Balance the data. > > 4. Switch between single disk, RAID 0, RAID 1, or RAID 10 configs. > > Shuffling data around as a result of any of these operations is done in > > the background and might take hours. > > _______________________________________________ > > clue mailing list: clue at cluedenver.org > > For information, account preferences, or to unsubscribe see: > > http://cluedenver.org/mailman/listinfo/clue > > _______________________________________________ > clue mailing list: clue at cluedenver.org > For information, account preferences, or to unsubscribe see: > http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190331/b42676ab/attachment.html From seanleblanc at comcast.net Sun Mar 31 11:46:58 2019 From: seanleblanc at comcast.net (Sean LeBlanc) Date: Sun, 31 Mar 2019 11:46:58 -0600 Subject: [clue] btrfs vs ZFS question In-Reply-To: <5c9c4071.1c69fb81.545f0.517f@mx.google.com> References: <8968e61f0fb80ea03c34e4b1b07fb999ef6347ee.camel@comcast.net> <5c9c4071.1c69fb81.545f0.517f@mx.google.com> Message-ID: <5eb8c77d-c869-6859-7745-00008a01dbdb@comcast.net>

Another comment: I've been listening to old episodes of this podcast (and slowly, slowly catching up). As the title implies, it's mostly about *BSD, but it does have a lot of overlap with other *nixes and sometimes has some interesting tidbits in there, the kind of stuff that would come up in CLUE and CLUE-related meetings of the past, and I think some on here could definitely relate. :) This talk of ZFS reminds me of this podcast because ZFS seems to come up a lot. At least one of the hosts is a FreeBSD committer; I'm not sure if that includes any ZFS-related work, but those two seem to know quite a bit about it. https://www.bsdnow.tv/ On 3/27/19 9:33 PM, Shawn Perry wrote: > > I'm assuming you mean me, so I'll answer. > > 1. You can add.
You should add in the same pattern that already > exists to maintain performance and redundancy. If you have a 4 > disk raid 5, you should add 4 more disks in a raid 5 config. > 1. You cannot remove yet. 0.8x will allow removing, but only to > cover accidental adds. > 2. You can resize up. If you replace a disk with a larger one, you > can expand the space. If you add more disks, you can use the extra > space. > 1. You cannot shrink or remove. > 3. The data does not need balancing unless you add disks. To > rebalance, you would need to re-copy the data. You can use > send/recv to do that. You'd need to stop things to do this. The > actual stoppage will be only the amount of time it takes you to > type "zfs rename ..." twice. Once to move the > old out of the way, once to move the new back to the original > location. > 4. Sorta. You can split mirrors in a raid 1 or raid 10 config to drop > down to a single disk or raid 0, respectively. You cannot reshape > like md or btrfs. > > *From: *Dennis J Perkins > *Sent: *Wednesday, March 27, 2019 9:23 PM > *To: *CLUE's mailing list > *Subject: *[clue] btrfs vs ZFS question > > Sean, does ZFS let you do these things? > > Btrfs lets you do the following without stopping anything: > > 1. Add or remove partitions. If you remove a partition, make sure the > > remaining drives have enough capacity. > > 2. Resize a btrfs system. > > 3. Balance the data. > > 4. Switch between single disk, RAID 0, RAID 1, or RAID 10 configs. > > Shuffling data around as a result of any of these operations is done in > > the background and might take hours.
> > _______________________________________________ > > clue mailing list: clue at cluedenver.org > > For information, account preferences, or to unsubscribe see: > > http://cluedenver.org/mailman/listinfo/clue > > _______________________________________________ > clue mailing list: clue at cluedenver.org > For information, account preferences, or to unsubscribe see: > http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190331/bf0bb95d/attachment-0001.html From dennisjperkins at comcast.net Sun Mar 31 13:25:40 2019 From: dennisjperkins at comcast.net (dennisjperkins) Date: Sun, 31 Mar 2019 13:25:40 -0600 Subject: [clue] btrfs vs ZFS question In-Reply-To: Message-ID: <20190331192559.210984626BA7@cluedenver.org>

Btrfs seems more flexible for snapshots, but that can also mean more complicated if you are not careful. You can only take snapshots of a subvolume. You might not want a snapshot of everything in /, like /home or /tmp, but if you make these subvolumes, a snapshot of / will not include them, because Btrfs won't include embedded subvolumes in a snapshot.

Sent from my Galaxy Tab S2

-------- Original message -------- From: Sean LeBlanc Date: 3/31/19 11:39 AM (GMT-07:00) To: clue at cluedenver.org Subject: Re: [clue] btrfs vs ZFS question

I think he might have meant me, but you saw it first and probably had more info anyway, so it works out. :) My experience with ZFS has so far been somewhat at arm's length. I've been using it via FreeNAS, and about the only thing I've done of any consequence is replace each drive, let it resilver, then move on to the other, until the entire set has been expanded. *knocks on wood* I kind of want this sort of storage to be boring, but reliable.
From what I can tell - and I only looked a little bit about 5 years ago or so - btrfs has more promise as far as features, and is not a pain to get to work under Linux (as opposed to things like ZoL), but in the opinion of some at the time, btrfs seemed a bit more, um, sketchy. ZFS had the advantage of a lot of research early on by Sun/Oracle, and then the OpenZFS fork made it available to the world and moved beyond just Solaris. It's a shame that it seems mostly still confined to FreeBSD. I don't mind FreeBSD, and actually like a few things about it, but I realize that easy Linux interop is going to make adoption much higher. Seems that btrfs is much more mature now and probably has more features than OpenZFS? Since Dennis' links prompted me to do more reading on it again, it does seem the CoW feature per file is an interesting one for sure, if I understand it correctly. Also based on comments or in articles themselves, I still may take a Pi and use that as a way of shipping deltas from my ZFS pools to a Pi running FreeBSD. Someone had mentioned they were doing incremental backups of a very large dataset (53Tb?) to a Pi in this way. Seems a good way to have some (extra) assurances of your data - at least if you are already using ZFS. On 3/27/19 9:33 PM, Shawn Perry wrote: I'm assuming you mean me, so I'll answer. 1. You can add. You should add in the same pattern that already exists to maintain performance and redundancy. If you have a 4 disk raid 5, you should add 4 more disks in a raid 5 config. a. You cannot remove yet. 0.8x will allow removing, but only to cover accidental adds. 2. You can resize up. If you replace a disk with a larger one, you can expand the space. If you add more disks, you can use the extra space. a. You cannot shrink or remove. 3. The data does not need balancing unless you add disks. To rebalance, you would need to re-copy the data. You can use send/recv to do that. You'd need to stop things to do this.
The actual stoppage will be only the amount of time it takes you to type "zfs rename ..." twice. Once to move the old out of the way, once to move the new back to the original location. 4. Sorta. You can split mirrors in a raid 1 or raid 10 config to drop down to a single disk or raid 0, respectively. You cannot reshape like md or btrfs. From: Dennis J Perkins Sent: Wednesday, March 27, 2019 9:23 PM To: CLUE's mailing list Subject: [clue] btrfs vs ZFS question Sean, does ZFS let you do these things? Btrfs lets you do the following without stopping anything: 1. Add or remove partitions. If you remove a partition, make sure the remaining drives have enough capacity. 2. Resize a btrfs system. 3. Balance the data. 4. Switch between single disk, RAID 0, RAID 1, or RAID 10 configs. Shuffling data around as a result of any of these operations is done in the background and might take hours. _______________________________________________ clue mailing list: clue at cluedenver.org For information, account preferences, or to unsubscribe see: http://cluedenver.org/mailman/listinfo/clue _______________________________________________ clue mailing list: clue at cluedenver.org For information, account preferences, or to unsubscribe see: http://cluedenver.org/mailman/listinfo/clue -------------- next part -------------- An HTML attachment was scrubbed... URL: http://cluedenver.org/pipermail/clue/attachments/20190331/c2653eb9/attachment.html From chris at fedde.us Sun Mar 31 17:00:18 2019 From: chris at fedde.us (Chris Fedde) Date: Sun, 31 Mar 2019 17:00:18 -0600 Subject: [clue] btrfs vs ZFS question In-Reply-To: <20190331192559.210984626BA7@cluedenver.org> References: <20190331192559.210984626BA7@cluedenver.org> Message-ID: The ZFS approach is typically to have way more mount points than we might have with a classic file system. Each user's home directory, for example, might be a mount point.
Any of your data, logging, and other write-heavy directories might each have a different mount point. Using this scheme, you can apply whatever filesystem attributes you want to a "directory" by converting it to a mount point. There are workflows that make this kind of migration pretty easy. Eventually it begins to seem "normal" to work this way. Of course it's not too hard to normalize anything.

ZFS itself remembers the configuration, so management of all these mount points is not as burdensome as it might seem at first.

On Sun, Mar 31, 2019 at 1:26 PM dennisjperkins wrote:
> Btrfs seems more flexible for snapshots, but that can also mean more
> complicated if you are not careful. You can only take snapshots of a
> subvolume. You might not want a snapshot of everything in /, like /home
> or /tmp, but if you make these subvolumes, a snapshot of / will not
> include them because Btrfs won't include embedded subvolumes in a snapshot.
>
> Sent from my Galaxy Tab S2
>
> -------- Original message --------
> From: Sean LeBlanc
> Date: 3/31/19 11:39 AM (GMT-07:00)
> To: clue at cluedenver.org
> Subject: Re: [clue] btrfs vs ZFS question
>
> I think he might have meant me, but you saw it first and probably had more
> info anyway, so it works out. :)
>
> [...]

From dennisjperkins at comcast.net Sun Mar 31 18:33:24 2019
From: dennisjperkins at comcast.net (Dennis J Perkins)
Date: Sun, 31 Mar 2019 17:33:24 -0700
Subject: [clue] btrfs vs ZFS question
In-Reply-To:
References: <8968e61f0fb80ea03c34e4b1b07fb999ef6347ee.camel@comcast.net>
 <5c9c4071.1c69fb81.545f0.517f@mx.google.com>
Message-ID:

I bought a couple of cheap SSDs to test Btrfs with on a RPi.
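For anyone trying the same experiment, the online operations from Dennis's earlier btrfs list map roughly onto these commands (the device names and mount point are hypothetical; this is a sketch, not a tested recipe):

```shell
# 1. Add or remove devices while the filesystem stays mounted
btrfs device add /dev/sdc /mnt/data
btrfs device remove /dev/sdb /mnt/data     # only if the remaining devices have capacity

# 2. Resize, e.g. after swapping in a larger disk
btrfs filesystem resize max /mnt/data

# 3. Rebalance existing data across the devices
btrfs balance start /mnt/data

# 4. Convert the data (-d) and metadata (-m) profiles, e.g. to RAID 1
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data
```

All of these run against a mounted filesystem, with the shuffling done in the background; `btrfs balance status /mnt/data` shows progress.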
I just need to stay in town long enough to do anything.

On Sun, 2019-03-31 at 11:39 -0600, Sean LeBlanc wrote:
> I think he might have meant me, but you saw it first and probably had
> more info anyway, so it works out. :)
>
> [...]

From dennisjperkins at comcast.net Sun Mar 31 18:46:45 2019
From: dennisjperkins at comcast.net (Dennis J Perkins)
Date: Sun, 31 Mar 2019 17:46:45 -0700
Subject: [clue] btrfs vs ZFS question
In-Reply-To:
References: <20190331192559.210984626BA7@cluedenver.org>
Message-ID:

You don't seem to need to mount a subvolume, but everyone seems to do it, because if you have a problem, you can unmount it and mount the latest snapshot. According to one website, Suse has subvolumes for /, /backups, /home, /opt, /root, /srv, /usr/local, /tmp, and /var. Some are so you don't overwrite data if you do a rollback. /tmp and /var hold temporary files, so there's no need to snapshot them. This is too complicated for some people, so they only have subvolumes for / and /home.

I don't know if you need to make a subvolume for /tmp if you are using tmpfs or ZRAM. I'd never heard of ZRAM, but NextcloudPi uses it for /tmp.

Suse has a utility called snapper that makes it easy to manage subvolumes, but I haven't looked at it yet.

Btrfs apparently can handle swap files now.
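A Suse-style subvolume layout like the one described above might be created by hand like this (the device, mount point, and subvolume names are assumptions; Suse actually sets this up through its installer and snapper):

```shell
# Mount the top-level btrfs volume, then carve out subvolumes.
mount /dev/sda2 /mnt
btrfs subvolume create /mnt/@         # will become /
btrfs subvolume create /mnt/@home     # will become /home
btrfs subvolume create /mnt/@tmp      # kept out of snapshots of @
btrfs subvolume create /mnt/@var

# A read-only snapshot of @ does not descend into other subvolumes,
# so @home, @tmp, and @var are excluded automatically.
btrfs subvolume snapshot -r /mnt/@ /mnt/@snapshot-before-upgrade
btrfs subvolume list /mnt             # confirm the layout
```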
You want to truncate the swap file every time you boot to get rid of old data, and then use fallocate to expand it back to the size you want before running mkswap and swapon on it. You also need to run "sudo chattr +C swapfile" to disable copy-on-write. Maybe there's a systemd unit file for this.

On Sun, 2019-03-31 at 17:00 -0600, Chris Fedde wrote:
> The ZFS approach is typically to have way more mount points than we
> might have with a classic file system. Each user's home directory, for
> example, might be a mount point.
>
> [...]
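Dennis's swap-file recipe at the top of this message can be sketched as a small boot script (the /swapfile path and 4G size are assumptions, and btrfs swap-file support is still new, so treat this as illustrative):

```shell
#!/bin/sh
# Recreate the swap file at every boot, per the steps described above.
SWAP=/swapfile
swapoff "$SWAP" 2>/dev/null || true
truncate -s 0 "$SWAP"      # empty the file to discard old data
chattr +C "$SWAP"          # disable copy-on-write; must be done while the file is empty
fallocate -l 4G "$SWAP"    # expand it back to the desired size
chmod 600 "$SWAP"
mkswap "$SWAP"
swapon "$SWAP"
```

A systemd oneshot unit ordered before swap.target could run this at each boot.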