I tried Longhorn on my homelab cluster. I'll admit it's possible that I did something wrong, but I somehow managed to get it into a state where my volumes seemed permanently corrupted. At the very least, I couldn't figure out how to get them working again.
When restoring from backup I went with Rook (a wrapper around Ceph) instead, and it's been much more stable, even able to recover (albeit with some manual intervention needed) from a total node hardware failure.
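For reference, once the Rook operator is running, the interesting bits are just a couple of manifests: a replicated CephBlockPool plus a StorageClass that provisions RBD volumes from it. A rough sketch (the pool and class names are placeholders, and I've trimmed a few of the CSI secret parameters that the Rook example manifests include) looks something like:

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool          # placeholder pool name
      namespace: rook-ceph
    spec:
      failureDomain: host        # keep copies on different nodes
      replicated:
        size: 3                  # three copies of every object
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-ceph-block      # placeholder class name
    provisioner: rook-ceph.rbd.csi.ceph.com
    parameters:
      clusterID: rook-ceph
      pool: replicapool
      csi.storage.k8s.io/fstype: ext4
      # ...plus the csi.storage.k8s.io/*-secret-name/namespace
      # parameters from the Rook example manifests
    reclaimPolicy: Delete

With size: 3, losing one node still leaves two copies of everything, which is what makes that kind of recovery possible.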
positisop [3 hidden]5 mins ago
Longhorn is a poorly implemented distributed storage layer. You are better off with Ceph.
willbeddow [3 hidden]5 mins ago
I have not used Longhorn, but we are currently in the process of migrating off of Ceph after an extremely painful relationship with it. Ceph has fundamental design flaws (like the way it handles subtree pinning) that, IMO, make more modern distributed filesystems very appealing. SeaweedFS is also cool, and for high-performance use cases, Weka is expensive but good.
__turbobrew__ [3 hidden]5 mins ago
Are there any distributed POSIX filesystems which don't suck? I think part of the issue is that a POSIX-compliant filesystem just doesn't scale, and you are just seeing that?
q3k [3 hidden]5 mins ago
That sounds more like a CephFS issue than a Ceph issue.
(a lot of us distrust distributed 'POSIX-like' filesystems for good reasons)
yupyupyups [3 hidden]5 mins ago
I've heard Ceph is expensive to run. But maybe that's not true?
keeperofdakeys [3 hidden]5 mins ago
Ceph's overheads aren't that large for a small cluster, but they grow as you add more hosts, drives, and storage. Probably the main gotcha is that you're (ideally) writing your data three times on different machines, which is going to lead to a large overhead compared with local storage.
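For example, with the default 3x replication, three hosts with 4TB of raw disk each net out to roughly 4TB usable, before any of Ceph's own overhead.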
Most resource requirements for Ceph assume you're going for a decently sized cluster, not something homelab sized.
jauntywundrkind [3 hidden]5 mins ago
I'm only just wading in, after years of intent. I don't feel like Ceph is particularly demanding. It does want a decent amount of RAM: 1GB each for the monitor, manager, and metadata daemons, up to 16GB total for larger clusters, according to the docs. But then each disk's OSD defaults to 4GB, which can add up fast, and some workloads can use more. 10GbE is recommended and more is better here, but that seems not unique to Ceph: syncing storage will want bandwidth. https://docs.ceph.com/en/octopus/start/hardware-recommendati...
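(If the per-OSD default is too much for a homelab, I believe the knob is osd_memory_target; something like "ceph config set osd osd_memory_target 2147483648" should drop each OSD to roughly a 2GB target instead of the 4GB default. It's a target rather than a hard cap, so OSDs can still spike above it. Treat that as a sketch from the docs, not something I've tuned in anger.)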
xyzzy123 [3 hidden]5 mins ago
For me it was the RAM for the OSDs, 1GB per 1TB but ideally more for SSDs...
> All you need is a machine, virtual or physical, with two CPU cores, 4GB RAM, and at least two or three disks (plus one disk for the operating system).
studmuffin650 [3 hidden]5 mins ago
Where I work, we primarily use Ceph as a K8s-native filesystem, though we still use OpenEBS for block storage and are actively watching OpenEBS Mayastor.
__turbobrew__ [3 hidden]5 mins ago
I looked into Mayastor and the NVMe-oF stuff is interesting, but it is so, so far behind Ceph when it comes to stability and features.
Once Ceph has the next-generation Crimson OSD with SeaStore, I believe it should close a lot of the performance gap.
coopreme [3 hidden]5 mins ago
Go with Ceph… a little more of a learning curve but overall better.
dilyevsky [3 hidden]5 mins ago
Anyone know what the story is with NVMe-oF/SPDK support these days? A couple of years ago Mayastor/OpenEBS was running laps around Longhorn on every performance metric; not sure if anything has changed there...
d3Xt3r [3 hidden]5 mins ago
Longhorn was the codename for Windows Vista... so not a great choice of a name (IMO).
onionisafruit [3 hidden]5 mins ago
Longhorn is a fine name, and it doesn't matter if somebody else used it 20+ years ago
weinzierl [3 hidden]5 mins ago
By that logic Titanic would be a fine name too.
NewJazz [3 hidden]5 mins ago
Hmm, maybe just shorten to Titan?
bigstrat2003 [3 hidden]5 mins ago
I mean, I think it would be. Superstition about naming is silly.
fineallaround [3 hidden]5 mins ago
As a codename, no less.
What a stupid thing to complain about.
privatelypublic [3 hidden]5 mins ago
Even complaining about Vista raises eyebrows. It had two huge issues: overactive UAC, and Microsoft handing "Vista Certified" to basically anybody who asked. (Frequently to machines that would barely run XP pre-SP1.)
Most of the complaints can be reduced to one of those.
Yes, I hand-wave away a lot of other things, because they were required for a huge step towards a decently secure and stable OS.
gdbsjjdn [3 hidden]5 mins ago
I thought this was going to be about Vista and how some of the FS stuff that got cut was prescient. "This old thing that didn't work was ahead of its time" is a whole genre of post (e.g. Itanium).
tracker1 [3 hidden]5 mins ago
I remembered the Windows Vista reference as soon as I saw the name. That said, I don't think it's a big deal.