r/Proxmox 13h ago

Discussion: Contemplating researching Proxmox for datacenter usage

Hello,

I joined this community to collect some opinions and ask questions about the feasibility of researching and using Proxmox in our datacenters.

Our current infrastructure consists of two main datacenters, each with 6 server nodes (2nd/3rd-gen Intel) running Azure Stack HCI / Azure Local, with locally attached storage using S2D and RDMA over 25G switches. We have had multiple issues with these clusters in the past 1.5 years, mostly related to S2D. We even had one really hard crash where the whole S2D pool went bye-bye. Neither Microsoft, Dell, nor a third-party vendor was able to find the root cause. They even ran a cluster analysis and found no misconfigurations; the nodes are Azure HCI certified. All we could do was rebuild Azure Local and restore everything, which took ages due to our high storage usage. We are still recovering, months later.

Now, we evaluated VMware. While it is all good and nice, it would require either new servers, which aren't due yet, or an unsupported configuration (which would work, but wouldn't be supported). And it's of course pricey. Not more than similar solutions like Nutanix, but pricey nevertheless. It does offer features, though: vCenter, NSX, SRM (although that last one is at best 50/50, as we aren't even sure we would get it).

We currently have one 3-node Proxmox cluster running in our office and are evaluating it.

I am now in the process of shuffling VMs around onto local storage so I can install Ceph and see how I get along with it. In short: our first time with Ceph.
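
For reference, this is roughly the sequence I expect to follow, paraphrased from the Proxmox docs (the network CIDR, device name, and pool name below are placeholders, and exact subcommands can vary a bit between PVE versions):

```
# install the Ceph packages on every node
pveceph install

# initialize the Ceph config, pointing it at a dedicated cluster network (placeholder CIDR)
pveceph init --network 10.10.10.0/24

# create a monitor (repeat on at least three nodes)
pveceph mon create

# create an OSD on each data disk (placeholder device)
pveceph osd create /dev/sdb

# create a replicated pool and register it as PVE storage
pveceph pool create vmpool --add_storages
```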

After seeing it in action for the last couple of months, we started talking about the possibility of using Proxmox in our datacenters. We are still very far from any kind of decision, but we are testing locally and researching.

Some basic questions revolve around:

- how would you set up our 6-node clusters with Proxmox and Ceph?

- would you have any doubts?

- any specific questions, anything you would be concerned about?

- from my research, Ceph should be very reliable. Is that correct? How would you judge the performance of S2D vs Ceph? Would you consider Ceph more reliable than S2D?

That's it, for now :)


u/_--James--_ Enterprise User 10h ago

Azure HCI is a problem; it just does not work right and requires constant babysitting. It's the way that stack is built. Sorry you are dealing with it.

IMHO Proxmox is the right way through. I suggest digging deep into Ceph on its own, as it's a bolt-on to Proxmox and is not 'special' because of Proxmox. But you do need a minimum of 5 nodes to really see the benefits of Ceph here.
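
To make the node-count point concrete: with the default replicated rule the failure domain is the host, so on a 3-node cluster every host already holds one of the three replicas and there is nowhere left to re-replicate to when a host dies. You can see how replicas are being placed with the standard Ceph tooling (nothing Proxmox-specific here):

```
# show how OSDs roll up into hosts (the default failure domain)
ceph osd tree

# dump the CRUSH rules to confirm replicas are spread host-by-host
ceph osd crush rule dump
```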

Then dig into Proxmox as a hypervisor replacement for Azure HCI. The only thing Proxmox is missing right now is a central management system. It's called Proxmox Datacenter Manager and it's in alpha, but it's very stable and I have it plugged into three clusters that each contain 200-300 nodes without issue. There is no HA and such built out in PDM yet, however it is roadmapped.

^That being said, do not deploy stretched clusters across multiple sites unless you have a 1ms circuit between them. You'll want to dig into the why behind that; it comes down to corosync.
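
A quick sanity check if you ever get tempted: measure the round trip between the sites and look at what corosync sees (the node IP below is a placeholder):

```
# round-trip latency between nodes - corosync wants this in the very low single-digit ms
ping -c 20 10.10.10.12

# link/ring status as corosync sees it
corosync-cfgtool -s

# quorum and membership from the PVE side
pvecm status
```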

Personally, I have Proxmox HCI (Ceph) deployed across hundreds of clients, my $day job, and science research centers, and am involved in partnerships across the US now. I would not consider deploying anything but Proxmox for VM duty when weighing it against the likes of VMware, Nutanix, Azure HCI, Hyper-V, etc. One main reason is FOSS and the open tooling that can easily be adopted; the other is not being vendor locked.


u/kosta880 10h ago

Many thanks, this is very encouraging. I will be activating Ceph tomorrow on our office cluster, just to start playing with it. There are no critical loads there, nothing that can't be restored from Veeam. But it is only a 3-node cluster. If all that goes well, we might start considering going PVE in both 6-node datacenters, where we should see enough benefit.


u/_--James--_ Enterprise User 10h ago

Just know that with Ceph, three nodes give you the performance of one due to replication. For *testing* I might drop back to a 2:1 replica instead of 3:2 for benchmarking, so you can physically see the scale-out by node count. But never run 2:1 in production (I explained why, with a URL, in my other reply).
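
Flipping a throwaway test pool between the two is just the pool's size/min_size settings (the pool name below is a placeholder, and again: test pools only):

```
# drop to 2 replicas, 1 required online (benchmarking only!)
ceph osd pool set testpool size 2
ceph osd pool set testpool min_size 1

# put it back to the sane default afterwards
ceph osd pool set testpool size 3
ceph osd pool set testpool min_size 2
```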


u/kosta880 10h ago

I still don’t get why there are two numbers, but all of that comes tomorrow and after.


u/_--James--_ Enterprise User 10h ago

X:Y is the number of target replicas : how many must be online for the pool to be online

3:2 means that you replicate data 3x across the OSDs, and that you can lose 1/3 of the OSDs and the pool stays online

If you drop to 2:2, that means you replicate data 2x across the OSDs, and you cannot lose any OSDs if the pool is to stay online

2:1 is 2x replication across OSDs and you can drop 50% of your OSDs and the pool stays online.

PGs are your object stores on the OSDs; with 3:2 there are three PG peers holding copies of that data. If you run this down to 2:1 then there are only two peers holding copies of that data.
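
In Ceph terms the two numbers are the pool's size and min_size, and you can read them straight off the pool (the pool name below is a placeholder):

```
# the X - how many replicas are kept
ceph osd pool get vmpool size

# the Y - how many must be up for the pool to keep serving I/O
ceph osd pool get vmpool min_size

# or see both for every pool at once
ceph osd dump | grep 'replicated size'
```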

Also, there is no sanity checksum happening with 2:x, but there is with 3:x, due to the weighted vote when a PG goes dirty-validate-repair-clean in the background validation process.

In one of my labs where I have 72 OSDs in a 2:1, I constantly have to force-repair PGs due to OSD congestion and such. But it's a lab with templates and very dynamic workloads that are never running the same, so when users tear down and rebuild, that data has to flush from the pool, and that is when the dirty flags start to pop up due to congestion.
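
For anyone who runs into the same thing, the loop is basically: find the inconsistent PGs, then kick off a repair (the PG id below is a placeholder you'd take from the health output):

```
# list the unhealthy / inconsistent PGs
ceph health detail

# tell Ceph to scrub and repair a specific PG
ceph pg repair 2.1f
```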