#ZFS is an awesome storage tool. If you use it or would like to use it and want to master it, wouldn't it be nice to have a book about that? Wonderful news: @mwl is writing that book! https://www.tiltedwindmillpress.com/product/openzfs-sponsor/
A bunch of popular #FreeBSD ports are now up for grabs, MAINTAINER-wise.
https://cgit.freebsd.org/ports/commit/?id=f12c037f5a354e15cd62541300de9ca6325401db
The drives I got off eBay still contained data:
https://dan.langille.org/2025/11/29/the-latest-satadom-drives-contained-data-when-i-got-them/
@dvl Assume K drives of size S each. raid10 has usable space (K/2) * S; here, 16T. raidz2 gives (K-2) * S, or 24T. The probability calculation is non-trivial, but let's assume 3+ simultaneous failures are negligible, and either layout survives any single failure. So it comes down to which of the 2-failure cases can be survived. For raidz2, it's all of them: 8*7/2 = 28. For raid10, 4 of those combinations lose data (both halves of the same raid1 mirror failing), so only 24 of the 28 two-failure scenarios are survivable. And it's 16T vs 24T of usable space.
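A quick sanity check of those counts, as a minimal /bin/sh sketch (the drive numbering and the pairing into four 2-way mirrors are assumptions for illustration):

#!/bin/sh
# Enumerate every two-drive failure combination for 8 drives and count how many
# a raid10 of four mirrors (pairs 0-1, 2-3, 4-5, 6-7) would not survive.
total=0
fatal=0
for i in 0 1 2 3 4 5 6 7; do
  for j in 0 1 2 3 4 5 6 7; do
    [ "$i" -lt "$j" ] || continue
    total=$((total + 1))
    # Fatal for raid10 only when both failed drives sit in the same mirror.
    [ $((i / 2)) -eq $((j / 2)) ] && fatal=$((fatal + 1))
  done
done
echo "two-failure combinations: $total"             # 28
echo "fatal for raid10:         $fatal"             # 4
echo "survivable by raid10:     $((total - fatal))" # 24; raidz2 survives all 28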
@dvl
The "Work Involved" - Setup, Snapshots, and Maintenance
ZFS is often called "administrator-friendly" because it consolidates many traditional storage tasks (volume management, RAID, filesystems) into one toolset. The "work" is different, not necessarily more.
Read more at filebin.
The Markdown file will be automatically deleted in 6 days.
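To illustrate that consolidation point, a minimal sketch; the pool name, device names, and dataset are placeholders, not anything from an actual setup:

# One command handles volume management and RAID:
zpool create tank mirror da0 da1 mirror da2 da3
# One command creates a mounted, ready-to-use filesystem (no newfs, no fstab edit):
zfs create -o compression=lz4 tank/home
# Snapshots come from the same toolset:
zfs snapshot tank/home@before-upgrade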
Did you know the #ZFS snapshot directory doesn't need to be spelled out?
[17:43 r730-01 dvl /jails/dev-ingress01/.zfs] % ls -l
total 0
dr-xr-xr-x+ 36 root wheel 36 2025.11.27 17:30 snapshot/
I can use snap instead of snapshot
[17:43 r730-01 dvl /jails/dev-ingress01/.zfs] % cd snap
[17:44 r730-01 dvl /jails/dev-ingress01/.zfs/snap] % ls -l
total 289
drwxr-xr-x 22 root wheel 26 2025.09.17 19:48 autosnap_2025-11-21_00:00:14_daily/
...
drwxr-xr-x 22 root wheel 26 2025.04.11 11:35 mkjail-202509051453/
[17:44 r730-01 dvl /jails/dev-ingress01/.zfs/snap] %
Occasional reminder that you can reference http://http.cat/ any time you need to look up HTTP status codes.
It's fantastic to pull up on a video call when you're sharing your screen.
@dvl is it not easier to manage one big zpool instead of many small...
One big 16TB zpool (8 x 4TB SSDs) or 2 x 8TB zpools?
I've got decisions to make now that all this stuff has come together.
https://dan.langille.org/2025/11/26/creating-a-new-zpool-for-r730-01/
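For comparison, the two layouts might look something like the sketch below; the device names and the choice of mirrored vdevs are assumptions for illustration, not the actual plan:

# Option 1: one big pool of four mirrored vdevs (8 x 4TB -> roughly 16TB usable)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# Option 2: two smaller pools, two mirrored vdevs each (roughly 8TB usable apiece)
zpool create tank1 mirror da0 da1 mirror da2 da3
zpool create tank2 mirror da4 da5 mirror da6 da7

One pool gives a single free-space figure and one set of datasets to manage; two pools keep the failure domains and maintenance windows separate.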
The script uses creation_time if "no scrub was done".
I suppose that's a fair compromise.
Today I learned that periodic/daily/800.scrub does not initiate a scrub on a new zpool (i.e. a zpool which has never been scrubbed).
Well, perhaps it might scrub one day, but it didn't scrub last night.
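If waiting out the threshold isn't appealing, a manual scrub puts a first scrub on record; a minimal sketch, with newpool as a placeholder pool name and the periodic.conf knobs as found in /etc/defaults/periodic.conf:

# Kick off the first scrub by hand, then check on it:
zpool scrub newpool
zpool status newpool
# Relevant /etc/periodic.conf settings for the daily script:
#   daily_scrub_zfs_enable="YES"
#   daily_scrub_zfs_default_threshold="35"   # days between scrubs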
I had time (while sitting in a cafe on Chestnut St in downtown Philadelphia) to play around with the SATADOM for the r730-01 host...
I realized a few key components were not present in the host:
1 - the SATADOM devices - they're still in a bag on the dining room table
2 - the mfsBSD thumbdrive - that is attached to the r730-04 test host
I *could* reboot r730-04 off the SATA drives, freeing up the other SATADOMs in that host. Then I could zfs send | recv from r730-01 into r730-04. I could allow root ssh for that.
Nope. Not going to do it. I'll wait until I'm home. Perhaps Tuesday morning.
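For the record, that send | recv over ssh would look something like the sketch below; the destination pool name is a placeholder, and -R/-u are just one reasonable choice of flags:

# On r730-01: snapshot the source recursively and stream it to r730-04 over ssh.
zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate | ssh root@r730-04 zfs recv -u -d destpool
# -R sends the whole dataset tree with its properties and snapshots;
# -u keeps the received datasets from being mounted on the destination.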
The last test before doing this on production.
I moved zroot from a larger zpool to a smaller zpool. It is now a very straightforward process. The hardest part may be making sure the old zroot no longer boots.
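On the "make sure the old zroot no longer boots" part, a couple of hedged options; oldzroot and newzroot are placeholder names, and which option fits depends on how the box boots:

# Confirm which dataset each pool advertises as bootable:
zpool get bootfs oldzroot newzroot
# Blunt but effective choices once the new zroot boots cleanly: export or
# destroy the old pool, or wipe the freebsd-boot/EFI partitions on its disks,
# and drop those disks from the BIOS/UEFI boot order.
zpool export oldzroot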