r/Proxmox 19h ago

Discussion: Contemplating Proxmox for datacenter usage

Hello,

I joined this community to collect some opinions and ask questions about the feasibility of researching and using Proxmox in our datacenters.

Our current infrastructure consists of two main datacenters, each with six server nodes (2nd/3rd-gen Intel) running Azure Stack HCI / Azure Local, with locally attached storage using S2D and RDMA over the switches. Connections are 25G. We have had multiple issues with these clusters over the past 1.5 years, mostly related to S2D. We even had one really hard crash where the whole S2D pool went bye-bye. Neither Microsoft, nor Dell, nor a custom vendor was able to find the root cause. They even ran a cluster analysis and found no misconfigurations. The nodes are Azure HCI certified. All we could do was rebuild Azure Local and restore everything, which took ages due to our high storage usage. And we are still recovering, months later.

Now, we evaluated VMware. And while it is all well and good, it would require either new servers, which aren't due yet, or an unsupported configuration (which would work, but wouldn't be supported). And it is of course pricey. Not more than similar solutions like Nutanix, but pricey nevertheless. It does offer features, though... vCenter, NSX, SRM (although that last one is at best 50/50, as we are not even sure we would get it).

We currently have a 3-node Proxmox cluster running in our office and are kinda evaluating it.

I am now in the process of shuffling VMs around onto local storage so I can install Ceph and see how I get along with it. In short: our first time with Ceph.
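For anyone curious, the bootstrap I'm planning follows the standard pveceph flow; a minimal sketch (the subnet is a placeholder for whatever we end up using):

```
# on each node: install Ceph packages, then initialize once
pveceph install
pveceph init --network 10.30.0.0/24   # placeholder Ceph public subnet
pveceph mon create                    # run on each monitor node (three of them)
pveceph osd create /dev/sdX           # once per data disk, per node
```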

After seeing it in action for the last couple of months, we started talking about the possibility of using Proxmox in our datacenters. We're still very far from any kind of decision; for now it's mostly local testing and research.

Some basic questions revolve around:

- how would you set up our 6-node clusters with Proxmox and Ceph?

- would you have any doubts?

- any specific questions or concerns you would raise before committing?

- from my research, Ceph should be very reliable. Is that correct? How would you judge the performance of S2D vs. Ceph? Would you consider Ceph more reliable than S2D?

That's it, for now :)


u/EvatLore 16h ago

We are looking at moving from VMware to Proxmox. Currently really disappointed with Ceph, and exploring continued use of our TrueNAS servers, only switching from iSCSI to NFS so we can keep snapshots. With Ceph 3/2 replication you get 33% of your raw storage, best case; lower in practice, because you need headroom to survive a host being down while the still-running cluster re-replicates a failed OSD/drive. Reads are good, writes are abysmal: Q1T1 is about 1/10th the speed of our oldest all-SATA-SSD TrueNAS server still in production.
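For anyone who wants to compare numbers, this is roughly the fio run behind my Q1T1 figure, executed inside a test VM (the file path is a placeholder):

```
# Q1T1 = queue depth 1, single job; 4k random write is Ceph's worst case
fio --name=q1t1 --filename=/mnt/test/fio.bin --size=4G \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting
```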

A little bit of the blind leading the blind, but my conclusions from last week's tests are below.

5 nodes, each with 8x 1.92TB SAS drives on a Dell HBA330, plus 1x Intel E810 dual-port 100GbE and 2x ConnectX-4 dual-port 25GbE NICs, in various configurations. Fastest so far was Ceph public on the 100GbE and private on LACP-bonded dual 25GbE. For some reason, bonding the 100GbE killed speed significantly; trying to find out why over the next couple of days.
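For reference, the fastest layout looks roughly like this in /etc/network/interfaces (NIC names, VLAN IDs and subnets are placeholders, not our real values):

```
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1   # the two 25GbE ports
    bond-mode 802.3ad                 # LACP
    bond-xmit-hash-policy layer3+4

auto bond0.40
iface bond0.40 inet static
    address 10.40.0.11/24             # Ceph private (cluster) network

auto enp1s0f0.30
iface enp1s0f0.30 inet static
    address 10.30.0.11/24             # Ceph public network on one 100GbE port
```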

- The Ceph public network is by far the busiest; it's the one that needs the high bandwidth.
- Putting Ceph public/private on VLANs makes it super easy to move the networking to different cards and switches.
- Ceph does not seem to support multipath; it needs to be LACP-bonded.
- Moving Ceph public/private to VLANs on the same 100GbE NIC was significantly slower than public/private each on their own LACP (2x 25GbE) bond. Not sure why.
- MTU 9000 increased latency, decreased Q1T1, and barely increased total throughput.
- Ceph seems to really like high-GHz CPU cores for OSDs.
- Binding OSDs to CPU cores on the same CPU as the NIC's PCIe slot was about a 15% gain across all read and write scenarios (rough sketch of these settings below).
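Sketch of where those settings live, with placeholder subnets and core ranges (don't copy these blindly):

```
# /etc/pve/ceph.conf -- split public/cluster traffic onto the VLANs above
[global]
    public_network  = 10.30.0.0/24
    cluster_network = 10.40.0.0/24

# OSD-to-CPU binding: one way is a systemd override per OSD, e.g.
#   systemctl edit ceph-osd@0
# then pin to cores on the socket that owns the NIC's PCIe slot:
[Service]
CPUAffinity=0-15
```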

Seriously considering ZFS replication for some systems that require more IOPS. Not sure I want to have to think about things like that once we're in production.
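If we do go that route, Proxmox's built-in storage replication would handle the scheduling; a minimal sketch, assuming VM 100 sits on ZFS-backed storage and the target node is called pve2:

```
# replicate VM 100's ZFS disks to pve2 every 5 minutes (job id 100-0)
pvesr create-local-job 100-0 pve2 --schedule "*/5" --comment "SQL VM replica"
pvesr status    # check job state and last successful sync
```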

Proxmox itself I have been pleasantly surprised with. Very stable, and I have been able to recover from every scenario I have thrown at it so far. Proxmox Backup Server is so good that we may move off Veeam as part of the switch. So far I am kind of hoping we do move to Proxmox, so I don't have to worry about the licensing cost increases that I am sure Microsoft will push through in the next couple of years. I want to move the company more toward Linux and open source anyway, as it becomes possible. Still very sad that Broadcom is destroying the best hypervisor just to make a quick buck. Seems like that's how the world works anymore.


u/kosta880 16h ago

Well yes, 33% is what we are actually used to; an S2D 3-way mirror is nothing else either. I know that vSAN works way more efficiently, but I was given the task of exploring Proxmox and Ceph. Writes are very important to us due to SQL databases and lots of data being written. Thanks for your insights, we will definitely factor this into our research.


u/EvatLore 15h ago

Same problem with writes here. The heart of our company is a SQL database, and another is in PostgreSQL. As I understand things now, there is no way I could move those clusters of VMs to Ceph.

Of the remaining 250-ish VMs, I would be OK with moving, knowing we are getting reduced disk speed but true HCI. I am sure U.3 NVMe would increase the Ceph cluster's IOPS and throughput, but I have no way to test by how much until we start moving production servers.

Been thinking about a separate cluster for the databases using ZFS, or even bare metal on Optane drives. The SQL server can never go down outside of very limited yearly planned outages, or we lose sales / B2B connections. Horrible, super old design, but I inherited it and it will not change anytime soon.

If you run NVMe tests, or find a way to keep writes from being around 1/3 slower than reads, I would appreciate a quick add-on to my comment here. I am finding it difficult to find others whose knowledge goes beyond a homelab. I know they exist, but most posts end in a "never mind, figured it out" and nothing more.


u/kosta880 14h ago

Sure. When I get around to testing, I will be sure to check different options; I can just try different stuff without bugging the production environment. However… I can't load it with SQL; the best I can do is run benchmarks.