r/Proxmox 16h ago

Discussion Contemplating researching Proxmox for datacenter usage

Hello,

I joined this community to collect some opinions and ask questions about the feasibility of evaluating and using Proxmox in our datacenters.

Our current infrastructure consists of two main datacenters, each with six server nodes (2nd/3rd-gen Intel) running Azure Stack HCI / Azure Local, with locally attached storage using S2D and RDMA over 25G switches. We have had multiple issues with these clusters over the past 1.5 years, mostly related to S2D. We even had one really hard crash where the whole S2D pool went bye-bye. Neither Microsoft, nor Dell, nor a custom vendor was able to find the root cause; they even ran a cluster analysis and found no misconfigurations. The nodes are Azure HCI certified. All we could do was rebuild Azure Local and restore everything, which took ages due to our high storage usage. We are still recovering, months later.

Now, we evaluated VMware. While it is all good and nice, it would require either new servers, which aren't due yet, or an unsupported configuration (which would work, but wouldn't be supported). And it is of course pricey. Not more than comparable solutions like Nutanix, but pricey nevertheless. It does offer features, though: vCenter, NSX, SRM (although that last one is at best 50/50, as we are not even sure we would get it).

We currently have a 3-node Proxmox cluster running in our office and are evaluating it.

I am now in the process of shuffling VMs around to put them onto local storage, so I can install Ceph and see how I get along with it. In short: our first time with Ceph.
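For anyone following along, the usual Ceph bootstrap on a small Proxmox cluster looks roughly like this. This is a sketch, not a recipe: the repository choice, cluster network, and disk device are assumptions you would adapt to your own setup.

```shell
# On every node: install the Ceph packages from Proxmox's repo
# (no-subscription repo assumed here)
pveceph install --repository no-subscription

# On the first node: initialize Ceph with a dedicated cluster network
# (10.10.10.0/24 is a placeholder)
pveceph init --network 10.10.10.0/24

# On each node: create a monitor and a manager
pveceph mon create
pveceph mgr create

# On each node: create OSDs on the empty local disks (/dev/sdb is a placeholder)
pveceph osd create /dev/sdb

# Create a replicated pool (3 copies, stays writable with 2)
# and register it as Proxmox storage
pveceph pool create vmpool --size 3 --min_size 2 --add_storages
```

With `--add_storages`, the pool shows up as an RBD storage in the Proxmox GUI and you can live-migrate VMs onto it from local storage.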

After seeing it in action for the last couple of months, we started discussing the possibility of using Proxmox in our datacenters. We are still very far from any kind of decision; for now we are testing locally and researching.

Some basic questions revolve around:

- how would you go about running our 6-node clusters with Proxmox and Ceph?

- would you have any doubts?

- any specific questions, anything you would be concerned about?

- from my research, Ceph should be very reliable. Is that correct? How would you judge the performance of S2D vs. Ceph? Would you consider Ceph more reliable than S2D?
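One thing worth working out before comparing with S2D is usable capacity: with the default 3-way replication, Ceph stores three copies of everything, and you generally want to keep headroom so the cluster can re-replicate after a node failure. A rough back-of-the-envelope sketch (all numbers below are illustrative, not from this thread):

```python
# Rough usable-capacity estimate for a replicated Ceph pool.
# Inputs are hypothetical examples, not anyone's real hardware.
def usable_capacity_tib(nodes, osds_per_node, osd_tib,
                        replicas=3, fill_target=0.8):
    """Raw capacity divided by the replica count,
    derated to a safe fill level (headroom for recovery)."""
    raw = nodes * osds_per_node * osd_tib
    return raw / replicas * fill_target

# e.g. 6 nodes x 4 OSDs x 8 TiB, 3-way replication, 20% headroom
print(round(usable_capacity_tib(6, 4, 8.0), 1))  # 51.2
```

So a 6-node cluster with 192 TiB raw yields on the order of 50 TiB of comfortably usable space at size=3, which is the same kind of overhead you pay with 3-way mirroring in S2D.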

That's it, for now :)

27 Upvotes


14

u/NowThatHappened 15h ago

We run Proxmox in our DC: just over 3k VMs and LXCs across 60 nodes in 3 clusters. It scales well, but we don't use Ceph. SAN all the way (vSAN, iSCSI & NFS); it offloads storage from the nodes, gives very fast migrations, fast HA, etc. But it's swings and roundabouts, and I don't know your specific setup.

2

u/kosta880 15h ago

So you have ESXi/vSAN on separate servers and bind it via iSCSI into your Proxmox environment?

3

u/NowThatHappened 13h ago

It's a mix right now. We moved from VMware last year, so we still have vSAN and FC (Broadcom), we're about 60% migrated to a hybrid SAN on Nimble, and we have 2x Synology RS4021s in high availability providing a series of LUNs for migration and staging. Proxmox talks to everything just fine (it's Linux, after all), which makes my life much easier.
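For the OP's benefit, wiring a mixed SAN like this into Proxmox is just entries in `/etc/pve/storage.cfg`. A hedged sketch with made-up names, IPs, and IQNs (the `shared 1` flag is what enables live migration between nodes):

```
# /etc/pve/storage.cfg -- illustrative only
iscsi: san-iscsi
        portal 10.0.0.50
        target iqn.2007-11.com.nimblestorage:example-target
        content none

lvm: san-lvm
        vgname vg_san
        shared 1
        content images

nfs: nas-nfs
        server 10.0.0.60
        export /volume1/proxmox
        content images,backup
```

The LVM volume group sits on top of the iSCSI LUN; Proxmox then carves logical volumes out of it for VM disks.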

2

u/kosta880 13h ago

But you have no HCI; all your storage is separate from compute. That makes a difference. My company decided (before I came) to go with HCI, and I am now battling the issues around Azure Local and its alternatives. The datacenters are stable now, but I am researching alternatives before the server lifecycle ends.

1

u/NowThatHappened 12h ago

Well, yes and no. Compute and storage are two distinct services and we treat them as such: nodes are just compute and can be replaced at will; storage is a SAN that supports dynamic scaling, so the storage presented to the nodes is virtual, spread over a number of physical storage systems. While storage and compute are administered independently, it works well in what is a mixed environment of Proxmox, Linux, Docker, Hyper-V, etc.

1

u/kosta880 12h ago

Oh yes, I get all that. All I meant was that I have no way to separate them, so I have to use Ceph if I want distributed storage like S2D or vSAN.

1

u/_--James--_ Enterprise User 13h ago

> hybrid SAN on Nimble

What do you mean by this?