buedi

joined 8 months ago
[–] buedi@feddit.org 1 points 3 days ago (1 children)

Thank you very much. I spent another two hours yesterday reading up on it and creating more VMs and templates, but I have not yet been able to attach the boot disk to a SCSI controller and make it boot. I would really like to see whether this change brings it on par with Proxmox (I now wonder what the defaults for Proxmox are), but even then it would still be much slower than Hyper-V or XCP-ng. If I find the time, I will look into this again.

[–] buedi@feddit.org 1 points 3 days ago

I don't work professionally in that field either. To answer your question: of course I would use whatever gives me the best performance. Why it is set up like this is beyond my knowledge. What you basically do in Apache CloudStack when you do not have a template yet is: you upload an ISO, and in the process you have to tell ACS what it is (Windows Server 2022, Ubuntu 24, etc.). From my understanding, the pre-defined OS types you can select and "attach" to an ISO include the specifics used when you create a new instance (VM) in ACS, and they seem to set the controller to SATA. Why? I do not know. I tried picking another OS type (I think it was called Windows SCSI), but the result was still a VM with the disks bound to the SATA controller, despite the VM having an additional SCSI controller that was not attached to anything.

This can probably be fixed on the command line, but I was not able to figure it out yesterday in the bit of spare time I had to tinker with it again. I would like to see whether it makes a big difference in this specific workload.

[–] buedi@feddit.org 2 points 4 days ago

I just can't figure out how to create a VM in ACS with SCSI controllers. I am able to add a SCSI controller to the VM, but the boot disk is always connected to the SATA controller. I tried to follow this thread (https://lists.apache.org/thread/op2fvgpcfcbd5r434g16f5rw8y83ng8k) and create a template, and I am sure I am doing something wrong, but I just cannot figure it out :-(

[–] buedi@feddit.org 3 points 4 days ago

I had a rough start with XCP-ng too. One issue was the NIC in my OptiPlex, which worked... but was super slow. So the initial installation of the XO VM (to manage XCP-ng) took over an hour. After switching to a USB NIC with a different Realtek chip, networking was no longer an issue.

For management, Xen Orchestra can be self-built, and it is quite easy and mostly works without extra knowledge or effort if you know the right tools. Tom Lawrence posted a video that I followed, and building my own XO is now quick and easy (sorry for it being a YT link): https://www.youtube.com/watch?v=fuS7tSOxcSo
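
For anyone who prefers text over video, the build roughly boils down to this. A sketch only, assuming git, Node.js and Yarn are already installed (package names and paths may differ on your distro):

    # Minimal sketch of building Xen Orchestra from the sources.
    # Assumes git, Node.js and Yarn are already installed.
    git clone -b master https://github.com/vatesfr/xen-orchestra
    cd xen-orchestra
    yarn            # install dependencies
    yarn build      # build the packages, including xo-server and xo-web

    # xo-server reads its config from ~/.config/xo-server/config.toml;
    # the sample.config.toml in packages/xo-server is a starting point.
    cd packages/xo-server
    yarn start

There is also a community installer/updater script floating around that automates these steps, if you would rather not do it by hand.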

[–] buedi@feddit.org 11 points 4 days ago (2 children)

Sure, ESXi would have been interesting. I thought about it, but I did not test it because it is no longer interesting to me from a business perspective. And I am not keen on using it in my homelab, so I left it out and used that time to do something relaxing. It's my holiday right now :-)

[–] buedi@feddit.org 5 points 4 days ago* (last edited 4 days ago) (6 children)

That's a very good question. The test system is running Apache CloudStack with KVM at the moment, and I have yet to figure out how to see which disk / controller mode the VM is using. I will dig a bit to see if I can find out. If it is not SCSI, it would be interesting to re-run the tests.

Edit: I did a 'virsh dumpxml <vm-name>' and the disk part looks like this:

  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/mnt/0b89f7ac-67a7-3790-9f49-ad66af4319c5/8d68ee83-940d-4b68-8b28-3cc952b45cb6' index='2'/>
      <backingStore/>
      <target dev='sda' bus='sata'/>
      <serial>8d68ee83940d4b688b28</serial>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

It is SATA... now I need to figure out how to change that configuration ;-)
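
If I get to it, this is the direction I would try on the libvirt side. A sketch only: CloudStack generates this XML when it starts the VM, so a manual edit may well not survive a stop/start from ACS, and the clean fix probably has to come from the template / OS type side:

    # Sketch: switch the boot disk from SATA to virtio-scsi via libvirt.
    # CloudStack regenerates the domain XML on start, so this may not stick.
    virsh edit <vm-name>

    # In the editor, make sure a virtio-scsi controller exists:
    #   <controller type='scsi' model='virtio-scsi'/>
    # and move the disk onto it by changing
    #   <target dev='sda' bus='sata'/>
    # to
    #   <target dev='sda' bus='scsi'/>
    # Windows guests also need the virtio-scsi driver (virtio-win) installed.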

[–] buedi@feddit.org 3 points 4 days ago

It would be cool to see how Linux-centric workloads behave on those hypervisors. Juuust in case you plan to invest some time into that ;-)

[–] buedi@feddit.org 11 points 4 days ago

Yes, it is Windows-centric because that is what the workload I need to run is based on. It would be cool to see a similar comparison with a Linux workload that puts strain on CPU, memory and disk.
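
If someone wants to try it, something along these lines could be a starting point. A sketch only; stress-ng and fio are my tool choices here, and the parameters are assumptions, not what my Windows workload does:

    # Rough Linux load mix for comparing hypervisors: CPU + memory + disk.
    # 4 CPU workers and 2 memory workers touching 2 GiB each, for 5 minutes.
    stress-ng --cpu 4 --vm 2 --vm-bytes 2G --timeout 300s &

    # 4 KiB random read/write on a 4 GiB test file, also for 5 minutes.
    fio --name=randrw --filename=/tmp/fio.test --size=4G \
        --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
        --numjobs=4 --runtime=300 --time_based --group_reporting

    wait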

[–] buedi@feddit.org 10 points 4 days ago

Oooh, that explains it! I was wondering what was going on. Thank you very much. And thank you for working on XCP-ng, it is a fantastic platform :-)


I spent a few days comparing various hypervisors under the same workload and on the same hardware. This is a very specific workload, and results might be different when testing other workloads.

I wanted to share it here because many of us run very modest hardware, and getting the most out of it is probably something others are interested in too. I also wanted to share it because maybe someone will spot a flaw in the configurations I ran, which might boost things up.

If you do not want to go to the post / read all of it, the very quick summary is that XCP-ng was the quickest and KVM the slowest. There is also a summary at the bottom of the post with some graphs, if that interests you. For everyone who reads the whole post, I hope it gives some useful insights for your self-hosting endeavours.