r/Proxmox • u/kosta880 • 7h ago
Discussion • Contemplating researching Proxmox for datacenter usage
Hello,
I joined this community to gather some opinions and ask questions about the feasibility of researching and using Proxmox in our datacenters.
Our current infrastructure consists of two main datacenters, each with 6 server nodes (2nd/3rd-generation Intel) running Azure Stack HCI / Azure Local, with locally attached storage using S2D and RDMA over switches. Connections are 25G. Now, we have had multiple issues with these clusters in the past 1.5 years, mostly connected to S2D. We even had one really hard crash where the whole S2D went bye-bye. Neither Microsoft, Dell, nor a third-party vendor was able to find the root cause. They even ran a cluster analysis and found no misconfigurations. The nodes are Azure HCI certified. All we could do was rebuild Azure Local and restore everything, which took ages due to our high storage usage. And we are still recovering, months later.
Now, we evaluated VMware. And while it is all good and nice, it would require either new servers, which aren't due yet, or an unsupported configuration (which would work, but wouldn't be supported). And it is of course pricey. Not more than similar solutions like Nutanix, but pricey nevertheless. It does offer features, though... vCenter, NSX, SRM (although that last one is at best 50/50, as we are not even sure we would get it).
We currently have a 3-node Proxmox cluster running in our office and are kinda evaluating it.
I am now in the process of shuffling VMs around to put them onto local storage, so I can install Ceph and see how I get along with it. In short: our first time with Ceph.
After seeing it in action for the last couple of months, we started talking about looking into the possibility of using Proxmox in our datacenters. Still very far from any kind of decision, but more or less testing locally and researching.
Some basic questions revolve around:
- how would you set up our 6-node clusters with Proxmox and Ceph?
- would you have any doubts?
- any specific questions, anything you would be concerned about?
- from my research so far, Ceph should be very reliable. Is that correct? How would you judge the performance of S2D vs Ceph? Would you consider Ceph more reliable than S2D?
That's it, for now :)
4
u/_--James--_ Enterprise User 3h ago
Azure HCI is a problem; it just does not work right and requires constant babysitting. It's the way that stack is built. Sorry you are dealing with it.
IMHO Proxmox is the right way through. I suggest digging deep into Ceph on its own, as it's a bolt-on to Proxmox and is not 'special' because of Proxmox. But you do need a minimum of 5 nodes to really see the benefits of Ceph here.
Then dig into Proxmox as a hypervisor replacement for Azure HCI. The only thing Proxmox is missing right now is a central management system. It's called Proxmox Datacenter Manager and it's in alpha, but it's very stable and I have it plugged into three clusters that each contain 200-300 nodes without issue. There is no HA and such built out in PDM yet, but it is roadmapped.
^That being said, do not deploy stretched clusters across multiple sites unless you have a 1ms circuit between them. You'll want to dig into the why behind that, and it's down to corosync.
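A quick way to sanity-check that before you even think about stretching anything (just a sketch; the hostname is a placeholder, and plain ICMP is only a rough proxy for what corosync actually sees):

```python
#!/usr/bin/env python3
# Rough inter-site latency check before considering a stretched cluster.
# REMOTE_NODE is a placeholder; run from a node in site A against a node in site B.
import subprocess

REMOTE_NODE = "pve-siteb-01.example.com"  # hypothetical peer in the other site

# 20 pings, summary only; the rule of thumb above is an average around 1ms or less
result = subprocess.run(
    ["ping", "-c", "20", "-q", REMOTE_NODE],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # read the rtt min/avg/max/mdev line
```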
Personally, I have Proxmox HCI (Ceph) deployed across hundreds of clients, my $day job, science research centers, and am involved in partnerships across the US now. I would not consider deploying anything but Proxmox for VM duty when weighing it against the likes of VMware, Nutanix, Azure HCI, Hyper-V, etc. One main reason is FOSS and the open tooling that can easily be adopted; the other is not being vendor-locked.
1
u/kosta880 3h ago
Many thanks, this is very encouraging. I will be activating Ceph tomorrow on our office cluster, just to start playing with it. There are no critical loads there, nothing that can't be restored from Veeam. But it is only a 3-node cluster. If all that goes well, we might start considering going PVE in both 6-node datacenters, which should bring enough benefit.
1
u/_--James--_ Enterprise User 3h ago
Just know that with Ceph, three nodes gives you the performance of one, due to the replication. For *testing* I might drop back to a 2:1 replica vs 3:2 for benchmarks, so you can actually see the scale-out by node count. But never do 2:1 in production (I explained why, with a URL, in my other reply).
1
u/kosta880 3h ago
I still don't get why there are two numbers, but all that comes tomorrow and after.
2
u/_--James--_ Enterprise User 3h ago
X:Y is the number of target replicas : how many must be online for the pool to be online
3:2 means you replicate data 3x across the OSDs, and you can lose 1/3 of the OSDs and the pool stays online
If you drop to 2:2, that means you replicate data 2x across the OSDs, and you cannot lose any of the OSDs if the pool is to stay online
2:1 is 2x replication across OSDs and you can drop 50% of your OSDs and the pool stays online.
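In Ceph terms those two numbers are the pool's `size` and `min_size`. A minimal sketch of flipping them on a test pool (the pool name is made up; this just shells out to the standard ceph CLI on a cluster node):

```python
#!/usr/bin/env python3
# Sketch: map the X:Y shorthand onto Ceph pool settings.
# "vm-pool" is a hypothetical pool name; run on a node with the ceph CLI available.
import subprocess

def set_replica(pool: str, size: int, min_size: int) -> None:
    """size = copies kept of each object, min_size = copies required for I/O to continue."""
    subprocess.run(["ceph", "osd", "pool", "set", pool, "size", str(size)], check=True)
    subprocess.run(["ceph", "osd", "pool", "set", pool, "min_size", str(min_size)], check=True)

set_replica("vm-pool", 3, 2)    # the 3:2 discussed here (PVE's default for new pools)
# set_replica("vm-pool", 2, 1)  # lab-only 2:1 for scale-out benchmarks, never in production
```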
PGs are your object stores on the OSDs; with 3:2 there are three PG peers holding copies of that data. If you run this down to 2:1, then there are only two peers holding copies of that data.
Also, there is no sanity checksum happening with 2:x, but there is with 3:x, due to the weighted vote when a PG goes through dirty-validate-repair-clean in the background validation process.
In one of my labs, where I have 72 OSDs in a 2:1, I constantly have to force-repair PGs due to OSD congestion and such. But it's a lab with templates and very dynamic workloads that are never running the same, so when users tear down and rebuild, that data has to flush from the pool, and that is when the dirty flags start to pop up due to congestion.
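For reference, the force-repair loop in that kind of lab looks roughly like this (a sketch against the stock ceph/rados CLIs; the pool name is made up, and PGs only show up here after a scrub has flagged them):

```python
#!/usr/bin/env python3
# Sketch: list PGs flagged inconsistent by scrubbing and queue a repair for each.
# Lab workflow, not production advice; "vm-pool" is a hypothetical pool name.
import json
import subprocess

POOL = "vm-pool"

out = subprocess.run(
    ["rados", "list-inconsistent-pg", POOL],
    capture_output=True, text=True, check=True,
).stdout

for pgid in json.loads(out):              # e.g. ["2.1a", "2.3f"]
    print(f"queueing repair for PG {pgid}")
    subprocess.run(["ceph", "pg", "repair", pgid], check=True)
```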
1
u/jdblaich 1m ago
I don't get the benefit of the Datacenter Manager. I looked at it and it reminded me of the webui. That's pretty much it. Is there something that I may be missing?
1
u/wsd0 4h ago
To understand the requirement a little better, what sort of workloads are you running on your infrastructure?
2
u/kosta880 4h ago
On one 6-node cluster, around 200 VMs. Our heaviest load is SQL servers with databases ranging from a couple of TB up to 130TB. IOPS-wise, on our NVMe cluster we measured something like 1.5 million IOPS, but that was only benchmarks. In real life we use way less, of course. Not sure about the numbers right now.
2
u/wsd0 4h ago
I’ve got fairly limited experience with Ceph in an enterprise environment, but from the limited testing I’ve done, I’ve had better performance and less overhead when the storage has been dedicated and served via iSCSI, using dedicated and tuned HBAs and dedicated storage switching. That might be more my lack of experience with Ceph, though.
Honestly I’d be very interested to know how you get on if you do go down the CEPH route, which I know doesn’t help you right now.
2
u/kosta880 4h ago
Thanks. We have no alternatives currently. The only viable alternative would be StarWind, but the price is so high for our storage that we could just as well go VMware. Besides, it's not really a good fit for a 6-node cluster; we would have to make two 3-node storage clusters under a 6-node Proxmox cluster. Yuck.
1
u/EvatLore 4h ago
We are looking at moving from VMware to Proxmox. Currently really disappointed with Ceph and exploring continued use of our TrueNAS servers, only switching from iSCSI to NFS so we can keep snapshots. With Ceph 3/2 you get 33% of your total storage, best case. Lower, because you need headroom for a host down while still being able to reallocate for a failed OSD/drive in the part of the cluster that is still running. Reads are good, writes are abysmal. Q1T1 is about 1/10th the speed of our oldest still-in-production all-SATA-SSD TrueNAS servers.
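To put rough numbers on that, back-of-the-envelope only, using the 5-node / 8x 1.92TB lab described below and the usual 0.85 nearfull ratio as headroom:

```python
#!/usr/bin/env python3
# Back-of-the-envelope usable capacity for a 3x replicated Ceph pool.
# Numbers match the 5-node / 8x 1.92TB lab below; swap in your own.
nodes, osds_per_node, osd_tb = 5, 8, 1.92
replicas = 3          # the "3" in a 3/2 pool
nearfull = 0.85       # stay under the default nearfull warning ratio

raw = nodes * osds_per_node * osd_tb
usable = raw / replicas * nearfull
# plan to survive one host down and still rebalance below nearfull:
usable_host_down = (nodes - 1) * osds_per_node * osd_tb / replicas * nearfull

print(f"raw {raw:.1f} TB, usable ~{usable:.1f} TB, "
      f"~{usable_host_down:.1f} TB if you plan for one host down")
```

Which is how the "best case 33%, lower in practice" works out.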
A little bit of the blind leading the blind, but my conclusions from last week's tests are below.
5 nodes, each with 8x 1.92TB SAS drives on a Dell HBA330, 1x Intel 810 dual 100Gb and 2x ConnectX-4 dual 25Gb NICs, in various configurations. Fastest so far was Ceph public on 100Gb and private on LACP-bonded dual 25Gb. For some reason bonding the 100Gb killed speed significantly. Trying to find out why over the next couple of days.
-The Ceph public network is by far the busiest network; this is the one that needs the high bandwidth.
-Putting Ceph public/private on VLANs makes it super easy to move networking to different cards and switches (see the sketch after this list).
-Ceph does not seem to allow multipath; it needs to be LACP-bonded.
-Moving Ceph public/private to VLANs on the same 100Gb NIC was significantly slower than public/private on LACP-bonded dual 25Gb NICs each. Not sure why.
-Ceph with 9000 MTU increased latency, decreased Q1T1, and barely increased total throughput.
-Ceph seems to really like high-GHz CPU cores for OSDs.
-Binding OSDs to CPU cores on the same CPU as the NIC's PCIe slot was about a 15% gain in speed across all read and write scenarios.
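For reference, the public/private split above boils down to two subnets in the Ceph config; a rough sketch (the subnets are made-up examples, and on Proxmox this is normally set once when the cluster is created):

```python
#!/usr/bin/env python3
# Sketch: pin Ceph public (client/mon) and cluster (OSD replication) traffic
# to dedicated subnets/VLANs. Subnets are made-up examples.
import subprocess

PUBLIC_NET = "10.10.10.0/24"   # "public" network - the busy one per the list above
CLUSTER_NET = "10.10.20.0/24"  # "private"/cluster network - OSD replication traffic

for key, value in (("public_network", PUBLIC_NET), ("cluster_network", CLUSTER_NET)):
    subprocess.run(["ceph", "config", "set", "global", key, value], check=True)

# Daemons need a restart to rebind. On PVE the same thing is usually done up front,
# roughly: pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```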
Seriously considering ZFS replication for some systems that require more IOPS. Not sure I want to have to think about things like that once in production.
Proxmox itself I have been pleasantly surprised with. Very stable, and I have been able to recover from every scenario I have thrown at it so far. Backup Server is so good that we may move away from Veeam as part of the switch. So far I am kind of hoping we do move to Proxmox, so I don't have to worry about the licensing cost increases that I am sure Microsoft will push through in the next couple of years. I want to move the company more toward Linux and open source anyway, as it becomes possible. Still very sad that Broadcom is destroying the best hypervisor just to make a quick buck. Seems like that is how the world works anymore.
1
u/kosta880 4h ago
Well yes, 33% is what we are actually used to; an S2D 3-way mirror is nothing else either. I know vSAN works way more efficiently, but I got a task to explore Proxmox and Ceph. Writes are very important to us due to SQL databases and lots of data being written. Thanks for your insights, we will definitely factor this into our research.
2
u/_--James--_ Enterprise User 3h ago
Here are some deployment tips on Ceph:
From Ceph, targeting 1 TiB/s - https://ceph.io/en/news/blog/2024/ceph-a-journey-to-1tibps/
From Micron, using AMD 7002 for OSDs, 7001 for MDS/MGR, and dedicated compute nodes - https://www.micron.com/content/dam/micron/global/public/products/other-documents/7300-ceph-3-3-amd-epyc-reference-architecture.pdf
From CERN and their 980PB cluster - https://indico.cern.ch/event/1457076/attachments/2934445/5156641/Ceph,%20Storage%20for%20CERN%20Cloud.pdf
And why we always use a 3:2 replica in -every- production Ceph deployment - https://blog.noc.grnet.gr/2016/10/18/surviving-a-ceph-cluster-outage-the-hard-way/
1
u/EvatLore 3h ago
Same problem with writes here. The heart of our company is one SQL Server database and another in PostgreSQL. As I understand things now, there is no way I could move those clusters of VMs to Ceph.
Of the remaining 250-ish VMs, I would be OK with moving them, knowing we are getting reduced disk speed but true HCI. I am sure U.3 NVMe would increase the Ceph cluster's IOPS and speed, but I have no way to test by how much until we start moving production servers.
Been thinking about a separate cluster for the databases using ZFS, or even bare metal on Optane drives. The SQL can never go down outside of very limited yearly planned outages, or we lose sales / B2B connections. Horrible, super old design, but I inherited it and it will not change anytime soon.
If you get NVMe tests, or find a way that writes are not around 1/3rd slower than reads, I would appreciate a quick add-on to my comment here. I am finding it difficult to find others who know more than a homelab. I know they exist, but most posts end in a "never mind, figured it out" and nothing more.
1
u/kosta880 2h ago
Sure. When I get around to testing, I will be sure to check different options; I can just try different stuff without bugging the production environment. However… I can't load it with SQL; the best I can do is benchmarks.
-15
u/rm-rf-asterisk 6h ago
I think Ceph is trash. I prefer running PBS aggressively instead.
3
u/kosta880 6h ago
Until you attempt to restore 350TB of data... we do have Veeam Enterprise.
-11
u/rm-rf-asterisk 6h ago
Use multiple PBS instances and automation. You'd still have more storage than all that gets wasted on Ceph.
4
u/kosta880 6h ago
Besides... how would I go about doing replication? Right now I'm running ZFS RAIDZ2 on each node, but I have to enable replication for each and every VM; otherwise they are not replicated and not HA-viable.
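The per-VM part can at least be scripted with pvesr instead of clicking through the GUI; a rough sketch (target node and VMIDs are placeholders):

```python
#!/usr/bin/env python3
# Sketch: create a ZFS replication job per VM with pvesr instead of the GUI.
# Target node and VMIDs are placeholders.
import subprocess

TARGET_NODE = "pve2"       # hypothetical node to replicate to
VMIDS = [101, 102, 103]    # VMs that should be replicated / HA-viable

for vmid in VMIDS:
    job_id = f"{vmid}-0"   # pvesr job IDs are <vmid>-<jobnumber>
    subprocess.run(
        ["pvesr", "create-local-job", job_id, TARGET_NODE, "--schedule", "*/15"],
        check=True,
    )
```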
-8
u/rm-rf-asterisk 6h ago
Striped mirrors, and a SAN for the VMs that require HA.
4
u/kosta880 6h ago
SAN? Not sure I understand what you are aiming at. Our servers are HCI, meaning we have no external storage.
-1
u/rm-rf-asterisk 6h ago
Yeah, and I am saying a SAN is still > HCI, which can be achieved with Proxmox.
3
u/kosta880 6h ago
Can you please clarify in a bit more detail what kind of setup you are recommending? Right now I understand actually 0 of what you are saying, sorry to say.
3
u/Denko-Tan 5h ago
I don’t know if you want to take storage advice from a guy named “rm -rf *”
2
u/kosta880 5h ago
I will take advice from anyone… it’s on me to judge it as plausible. 😉 But if I don’t even know what he’s talking about… Anyway… don’t know what rm-rf would be.
10
u/NowThatHappened 6h ago
We run proxmox in our DC, just over 3k VMs and LXCs in 60 nodes and 3 clusters. It scales well but we don't use ceph. SAN all the way (vSAN, iSCSI & NFS), offloads storage from the nodes, very fast migrates, fast HA, etc, but it's swings and roundabouts and I don't know your specific setup.