EDIT - 20191104 @ 2057 UTC-7 - Figured out how long it takes to scrub 40TB of disk space. Also did a couple of experiments with rebalancing btrfs and monitored how long it took.
A couple of weeks ago, while working on Leandra, I started feeling more and more dissatisfied with how I had her storage array set up. I had a bunch of 4TB hard drives inside her chassis glued together with Linux's mdadm subsystem into what amounted to one mother-huge hard drive (a RAID-5 array with a hot spare in case one blew out), with LVM on top of that, which let me pretend I was partitioning that mother-huge drive so I could mount large-ish pieces of it in different places. The thing is, while you can technically resize those virtual partitions (logical volumes) to reallocate space, it's not exactly easy. There's a lot of fiddly stuff you have to do, in the right order (make sure you actually have enough space to give up, shrink the donor's filesystem, shrink its logical volume to match, grow the logical volume that needs the space, then grow its filesystem), and it gets annoying in a crisis.

There was a second concern: figuring out which drive was the one that blew out, when none of them were labelled or even had indicators of any kind showing which drive was doing something (like throwing errors because it had crashed). Fixing this required fairly major surgery, on both the hardware and software sides.
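To give you an idea of just how fiddly that reshuffle is, here's a rough sketch of the dance as a dry run: it only prints the commands it would run, because every one of these steps can eat data if you get the sizes or the order wrong. The volume group name ("array"), the LV names ("scratch" as the donor, "media" as the one that needs space), and all the sizes are made-up examples, not anything from my actual setup; substitute your own and double-check each size before doing it live.

```shell
#!/bin/sh
# Dry-run sketch of shuffling 500G between two LVs in one volume group.
# "array", "scratch", "media", and the sizes are hypothetical examples.

run() { echo "would run: $*"; }   # change to run() { "$@"; } to do it for real

reshuffle() {
    # 0. First, confirm the donor really has the space to give up (df -h).
    run umount /dev/array/scratch
    run e2fsck -f /dev/array/scratch      # a forced fsck is mandatory before shrinking
    # 1. Shrink the filesystem to comfortably *below* the target LV size...
    run resize2fs /dev/array/scratch 400G
    # 2. ...then shrink the LV itself down to the target, freeing the extents.
    run lvreduce -f -L 450G /dev/array/scratch
    run resize2fs /dev/array/scratch      # grow the fs back out to fill the LV exactly
    run mount /dev/array/scratch
    # 3. Hand the freed extents to the LV that needs them...
    run lvextend -L +500G /dev/array/media
    # 4. ...and grow its filesystem to match (ext4 can do this while mounted).
    run resize2fs /dev/array/media
}

reshuffle
```

Note the asymmetry: shrinking has to go filesystem-first, then LV, while growing goes LV-first, then filesystem. Get that backwards on the shrink side and the LV cuts off the tail end of a live filesystem.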
By the bye, the purpose of this post isn't to show off how clever I am or brag about Leandra. This is one part the kind of tutorial I wish I'd had when I was first starting out, and I hope that it helps somebody wrap their mind around some of the more obscure aspects of system administration. This post is also one part cheatsheet, both for me and for anyone out there in a similar situation who needs to get something fixed in a hurry, without a whole lot of trial and error. If deep geek porn isn't your thing, feel free to close the tab; I don't mind (but keep it in mind if you know anyone who might need it later).