Changes to BtrFS between r11 and r12
''Butter Filesystem. Hold the toast.''
I've started experimenting with ''BtrFS'', which aims to provide an "advanced and modern filesystem" (it is heavily compared to ZFS) for Linux. On my new workstation I'm using BtrFS for my home directories (`/home`) and my build directories (`/mnt/slackbuilds`) to gain exposure to the filesystem and to compare it with ZFS and with EXT4 on LVM (all of my other data, including my root disk, is on EXT4 on LVM).
I have used ZFS heavily in the past, and using BtrFS is significantly different because many of the fundamental concepts differ. BtrFS has no concept of "pools" or "volume groups" -- instead there are "volumes." It has no concept of "datasets" or "logical volumes" -- instead there are "subvolumes."
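A rough illustration of the mapping (the pool, subvolume, and mountpoint names here are hypothetical, and exact behaviour can vary between releases): where on ZFS I would run `zfs create tank/build` to get a new dataset under a pool called `tank`, on BtrFS I run `btrfs subvolume create /home/build` to get a new subvolume inside an already-mounted volume, and `btrfs subvolume list /home` shows which subvolumes exist (a rough analogue of `zfs list`).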
Here's a comparison between [http://hub.opensolaris.org/bin/view/Community+Group+zfs/WebHome | ZFS], [https://btrfs.wiki.kernel.org/index.php/Main_Page | BtrFS], and [https://ext4.wiki.kernel.org/index.php/Main_Page | EXT4] on [http://sources.redhat.com/lvm2/wiki/ | LVM]:
(blank line)
'''General'''
%| | ZFS | BtrFS | EXT4 and LVM |%
&| '''Commands Involved''' | `zpool`, `zfs` | `mkfs.btrfs`, `btrfs` | `pvcreate`, `vgcreate`, `lvcreate`, `mkfs.ext4` |&
&| '''Can be Boot filesystem''' | Yes | No | No |&
&| '''Can be Root filesystem''' | Yes | Yes | Yes |&
&| '''Can provide swapspace''' | Yes (zvols) | No | Yes (lvm) |&
&| '''Pool of disks''' | "zpool" | "volume" | "volume group" |&
&| '''Mountable unit''' | "dataset" | "volume" and "subvolume" | "logical volume" |&
&| '''OSes with Implementations''' | Solaris, OpenSolaris, Nexenta, FreeBSD, Mac OS X, Linux | Linux | Linux |&
&| '''Stability''' | Stable | Unstable (On-disk format Stable) | Stable |&
&| '''CLI-System Integration [[1]]''' | Strong | Weak | Mild |&
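
As a quick feel for the "Commands Involved" row above, these are the status/listing commands I reach for first (the pool, volume-group, and mountpoint names are the hypothetical ones used further down this page):
1. ZFS: `zpool status TESTPOOL` and `zfs list`
1. BtrFS: `btrfs filesystem show` and `btrfs filesystem df /data`
1. EXT4 on LVM: `pvs`, `vgs`, and `lvs`, plus plain `df -h /data`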
(blank line)
'''Features'''
%| | ZFS | BtrFS | EXT4 and LVM |%
&| '''Grow Online''' | Yes | Yes | Yes |&
&| '''Shrink Pool''' | No | Online | Online |&
&| '''Shrink Filesystem''' | No | Online | Offline |&
&| '''Replace Disk (without parity)''' | Yes (replacement must be a disk of compatible size) | Yes | Yes (`pvmove` copies only the allocated extents) |&
&| '''Filesystem Level Storage Pooling''' | Yes | Yes | No |&
&| '''Re-balance''' | No | Yes | Can be done manually (`pvmove`) |&
&| '''Checksumming''' | Yes | Yes | No |&
&| '''Correctable Checksum Errors''' | Yes | ??? | No |&
&| '''Autocorrect Checksum Errors''' | Yes | ??? | No |&
&| '''Compression''' | Yes | Yes | No |&
&| '''De-duplication''' | Yes | No | No |&
&| '''Ditto Blocks''' | Yes | ??? | No |&
&| '''Tiered Caching''' | Yes | No | No |&
&| '''Writable Snapshots''' | Yes (clone) | Yes | Yes |&
&| '''Copy-on-Write''' | Fast, space-efficient | Fast, space-efficient | Slow, requires pre-allocating an LV |&
&| '''Redundancy''' | Mirroring and Parity (x1, x2, x3) | Mirroring | Mirroring, though the PVs can themselves be software RAID devices |&
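
To make the '''Compression''' and '''Writable Snapshots''' rows above concrete, here is a sketch of how I would exercise them (`TESTPOOL`, `/dev/A`, `/data`, and `DATA` are the hypothetical names used in the examples below; option spellings can vary between releases):
1. ZFS: `zfs set compression=on TESTPOOL`, then `zfs snapshot TESTPOOL@before` and `zfs clone TESTPOOL@before TESTPOOL/dataclone` for a writable copy of the snapshot
1. BtrFS: mount with `mount -o compress /dev/A /data` to compress new writes, and `btrfs subvolume snapshot /data /data/snap-before` for a snapshot that is writable by default
1. EXT4 on LVM: no compression, but `lvcreate -s -L 1G -n DATAsnap /dev/TESTPOOL/DATA` gives a writable snapshot backed by a pre-allocated LV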
(blank line)
'''Limits'''
%| | ZFS | BtrFS | EXT4 and LVM |%
&| '''Maximum Volume Size''' | 16 Exabytes | 16 Exabytes | 1 Exabyte |&
&| '''Maximum File Size''' | 16 Exabytes | 16 Exabytes | 16 Terabytes |&
&| '''Maximum Number of Snapshots''' | ''Unlimited'' | ''Unlimited'' | Effectively 32 |&
(blank line)
[[1]] For lack of a better term -- how well the command-line interface integrates with the system as a whole; this is admittedly subjective.
(blank line)
For a more complete, but less focused, comparison see [http://en.wikipedia.org/wiki/Comparison_of_file_systems | Wikipedia's Comparison of Filesystems].
----
'''The Rosetta Stone'''
1. ''Task:'' Create pool of storage from disks `/dev/A`, `/dev/B`, and `/dev/C` (striped or linear concat; a verification sketch follows this list)
11. Using ZFS:
111. `# zpool create TESTPOOL A B C`
11. Using BtrFS:
111. `# mkfs.btrfs /dev/A /dev/B /dev/C`
11. Using EXT4 on LVM:
111. `# pvcreate /dev/A /dev/B /dev/C`
111. `# vgcreate TESTPOOL /dev/A /dev/B /dev/C`
1. ''Task:'' Make storage from pool available to system
11. Using ZFS:
111. `# zfs set mountpoint=/data TESTPOOL`
11. Using BtrFS:
111. `# mkdir /data`
111. `# mount -t btrfs /dev/A /data`
111. Update `/etc/fstab`
11. Using EXT4 on LVM:
111. `# mkdir /data`
111. `# lvcreate -L ''SizeOfVolume'' -n DATA TESTPOOL`
111. `# mkfs -t ext4 /dev/TESTPOOL/DATA`
111. `# mount /dev/TESTPOOL/DATA /data`
111. Update `/etc/fstab`
1. ''Task:'' Replace disk (without parity)
11. Using ZFS:
111. `# zpool replace TESTPOOL A D`
11. Using BtrFS:
111. `# btrfs device add /dev/D /data`
111. `# btrfs device delete /dev/A /data`
111. `# btrfs filesystem balance /data`
11. Using EXT4 on LVM:
111. `# pvcreate /dev/D`
111. `# vgextend TESTPOOL /dev/D`
111. `# pvmove /dev/A /dev/D`
111. `# vgreduce TESTPOOL /dev/A`
(blank line)
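To verify that the tasks above did what was expected (a sketch using the same hypothetical names; output and option spellings differ between versions):
1. After creating the pool: `zpool list TESTPOOL` for ZFS, `btrfs filesystem show` for BtrFS (it lists the member devices), and `pvs` or `vgs` for LVM
1. After replacing the disk: `zpool status TESTPOOL` shows the resilver on ZFS, `btrfs filesystem show` should no longer list `/dev/A`, and on LVM `pvs` should show `/dev/A` removed from the volume group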