Help me build: Plex media server

I used to have to spread my library over drives but with the newer bigger capacities I have everything on one 14TB drive. For unraid do you have a bigger library or just looking for redundancy in case of failure?

Well, say goodbye to UnRaid. I don't know Linux and it was somewhat of an exercise in frustration. It was basically there to have one drive do everything for me. Every time I added a new drive, I'd have to go in and switch things over to the new drive. It was kind of annoying, so I'm looking for a "set it and forget it" type of setup.

When I installed UnRaid I bought a new SSD so I could keep my Windows 10 setup just in case. I was able to boot that back up tonight and get it running in no time. All I had to do was change my media paths to F:/TV and F:/Movies. Everything resides in one of those 2 directories.

I did find a program called StableBit DrivePool. On their main page, it says "A state of the art disk pooling application with file duplication." Giving that a try. I have various drives all pooled into my F: drive, a 35.5TB usable-space array.

If you use disks that are all the same size, Linux is really, really good at joining them into a larger array with its 'mdraid' facility. It's a little tricky to set up, but it then gives you redundancy: depending on the RAID level you choose, up to three drives can fail without losing data. You do lose space this way, though. To tolerate X drives failing, you give up the space of X drives. (That is, on a ten-drive RAID6, which can lose any two drives, you get the usable space of eight drives.) The Linux engine for this is extremely fast; it's best-of-class software.
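That capacity math is easy to sketch. The drive count, parity level, and per-drive size below are just example numbers, and the mdadm commands in the comments use placeholder device names:

```shell
# Usable capacity of an N-disk array with P parity disks:
#   usable = (N - P) * size_of_smallest_disk
disks=10      # total drives in the array
parity=2      # RAID6 tolerates two failed drives
size_tb=4     # per-drive capacity in TB (assumed)
usable_tb=$(( (disks - parity) * size_tb ))
echo "usable: ${usable_tb} TB out of $(( disks * size_tb )) TB raw"

# Creating the array itself (run as root; /dev/sd[b-k] are placeholders):
#   mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]
#   mkfs.ext4 /dev/md0
```

So ten 4TB drives in RAID6 net you 32TB usable.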

But if you want to just stick disks on willy-nilly and have them all be different sizes, mdraid is no help. You can use Unraid, but I don't believe that offers any redundancy, so if you lose a disk, you will (at least) lose all the data that was on that disk. You might lose more; I'm not really familiar with how it works.

Be careful of simple agglomeration, because you're pinning all your hopes on more and more spindles, and if there's one thing you can be absolutely certain of, it's that those disks will eventually fail. The more disks you're using, the worse the problem becomes. That's what the redundancy/RAID thing is for.
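You can put a rough number on "worse." Assuming something like a 5% annual failure rate per drive (a made-up but plausible figure), the chance that at least one of N drives dies in a year is 1 - 0.95^N:

```shell
p_one=0.05   # assumed 5% annual failure rate per drive
n=10         # number of drives in the pool
p_any=$(awk -v p="$p_one" -v n="$n" 'BEGIN { printf "%.1f", 100*(1-(1-p)^n) }')
echo "with $n drives: ${p_any}% chance at least one fails this year"
```

With ten drives that works out to roughly a 40% chance of losing a drive in any given year, which is why unprotected pooling gets scary as you grow.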

RAID also functions as a very weak form of backup. It's really about preventing downtime from drive failure, but it accidentally protects you against some forms of catastrophic data loss. It does nothing, however, to protect against fat-finger error, so backups are still important. It also can't protect you if you lose more than X disks, which happens more often than you'd think.

edit: also note that you can hot-expand an mdraid volume, but your new disk needs to be at least as large as the old ones. The usable size of any array is smallest disk * (number of disks - redundancy level), and you cannot add a new disk that is even one byte smaller than the old ones. It's often a good idea to partition the drives to 10 megs or so less than their full capacity, so that it's easy to replace them with another brand later on.
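Leaving that slack is just a matter of ending the partition a little before the end of the disk. The sector count below is an assumed figure for illustration, and the parted commands in the comments use a placeholder device:

```shell
# Leave ~10 MiB of slack at the end of each drive so a future
# replacement that's a few megs smaller can still join the array.
disk_sectors=27344764928          # assumed raw size of a "14TB" drive
sector_size=512
slack_mib=10
end_sector=$(( disk_sectors - slack_mib * 1024 * 1024 / sector_size ))
echo "partition ends at sector ${end_sector}"

# Then (as root; /dev/sdX is a placeholder):
#   parted /dev/sdX mklabel gpt
#   parted /dev/sdX mkpart primary 2048s ${end_sector}s
```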

It takes many hours to resilver a new disk properly. Changing the number of disks in a RAID basically means that every sector on every drive has to be rewritten.
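A back-of-the-envelope estimate makes the "many hours" concrete. The throughput figure is an assumption (reshapes are often slower than raw disk speed), and the mdadm commands in the comments are placeholders:

```shell
# Rough reshape-time estimate: every sector gets rewritten, so
#   time ~= total raw capacity / rewrite throughput
disks=10
size_tb=4
mb_per_s=150                      # assumed array-wide rewrite speed
total_mb=$(( disks * size_tb * 1000 * 1000 ))
hours=$(( total_mb / mb_per_s / 3600 ))
echo "roughly ${hours} hours to reshape"

# The reshape itself (as root; device names are placeholders):
#   mdadm --add /dev/md0 /dev/sdl
#   mdadm --grow /dev/md0 --raid-devices=11
#   resize2fs /dev/md0             # then grow the filesystem
```

So a 40TB array at 150MB/s is on the order of three days, and the array runs degraded-speed the whole time.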

If you want more reliability, the ZFS filesystem is available on Ubuntu, and has an extraordinarily good reputation. It has a laser focus on data integrity, writing checksums with every block of data so you'll always know if a disk gives you bad data. (Most other filesystems just accept the bad data and never notice.) If you have at least one redundant disk in a given pool, it will fix the bad data automatically, but if a file has an unfixable error, ZFS won't give you *any* of it. It will just return an "invalid data" error. Backups are extra-important for critical files on ZFS, but you'll never have an invisibly corrupt file.
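If you want to poke at the mechanics without risking real disks, ZFS will happily build a pool out of sparse files. This is a throwaway sketch (needs ZFS installed and root; the pool name "tank" and file paths are arbitrary), not how you'd build a production pool:

```shell
# Toy raidz2 pool backed by six 1GB sparse files; survives any two "disk"
# failures. Real pools use whole drives instead of files.
for i in 1 2 3 4 5 6; do truncate -s 1G /tmp/zdisk$i; done
zpool create tank raidz2 /tmp/zdisk[1-6]

zpool scrub tank        # re-reads everything and verifies every checksum
zpool status tank       # reports any checksum errors the scrub found

# Clean up the experiment:
#   zpool destroy tank && rm /tmp/zdisk*
```

A periodic `zpool scrub` is the thing that actually catches silently-rotting data before you need it.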

It is, however, an alien filesystem that does everything its own way, so learning it takes real investment. It comes from Solaris, and it doesn't feel very Linuxy. It works very well, but comes with a different set of expectations, and feels very weird compared to standard Linux drive management.

ZFS is incapable of changing the number of disks in a RAID. If you want to expand a ZFS filesystem, you can either add another set of disks in a separate RAID, or you can replace each disk, one at a time, with a larger one. After the array resilvers, you can replace the next drive, and so on. When the last disk is replaced with a bigger one, wham, your filesystem instantly grows to fit the new disks. The overall process can take a week or more.
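One round of that swap looks roughly like this. The pool name "tank" and the device names are placeholders, and this needs root:

```shell
zpool set autoexpand=on tank          # let the pool grow once all disks are big
zpool replace tank /dev/sdb /dev/sdf  # old small disk -> new bigger disk
zpool status tank                     # shows "resilver in progress" with an ETA

# Wait for the resilver to finish, then repeat for sdc, sdd, sde...
# After the last small disk is replaced, `zpool list tank` shows the new size.
```

With `autoexpand=on` set, the capacity bump at the end happens on its own; otherwise you'd have to bring each vdev online with `zpool online -e` to claim the new space.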