After fiddling with it for a day and getting it to work, I thought it would be nice to share the knowledge, especially since "officially" only ESXi is supported, and ESXi is picky about what it supports.

The main reason for me to move to a hypervisor was that Synology has not introduced NVMe support AT ALL. Even with a kernel driver, Storage Manager will not see the drive as an SSD, an HDD, or anything else. Synology is silent about this, even though some have requested it on their forums (although they do not have a model with an M.2 NVMe connector yet, they do have some models with full-sized x16 or x4 PCIe ports, which can be used with an adapter card for NVMe).

On one hand, a hypervisor makes installation and upgrades easier, and I don't need a display connected (since most hypervisors provide a VNC connection to the guest OS). On the other hand, I can install the measly 2-3 GB hypervisor and all the tools on the NVMe SSD and have the rest of it mounted as a VMDK (or any other virtual disk file). The rest of the hard drives would use passthrough, of course.

Why Proxmox? XenServer simply refused to boot in UEFI mode; ESXi does not support my network adapter or my built-in SATA controller (B250 chipset); Microsoft's Hyper-V server has issues if you do not have a domain server on the network, and as soon as the display output goes off, the device drops its network connection. That left Proxmox. I had never used it, and during installation I had some issues with the bootloader (both on the 4.4 release and the 5.0 beta). Luckily there's a workaround: since Proxmox is based on Debian, one can use the Debian netinst image, create a very basic system, and install Proxmox on top.

You'll need a working install of Proxmox 5.0 - it can be 4.4 too, but I only tested this on 5.0. I won't bore you with the details; there are enough guides about installing it to make me think twice before writing an (n+1)th version.

0. Follow the guide to create the bridged network interface!

I recommend Jun's Loader, specifically 1.02a2 at the time of writing this guide. Edit the loader (if needed) to your liking - MAC address, serial number, etc. This is especially important if you have multiple XPE systems on the same network.

1.1 Set the name, and make sure to note down the VM ID (if it is your first VM, it should be 100).

Be sure to _also_ install a second test-VM where you do not use passthrough drives but rather images - install the same version, clone/snapshot/backup this VM's conf, and fix the "size" parameter of the drive to match the actual size: `ls -la vm-100-disk-1.qcow2`.

When you plan to upgrade your production VM, snapshot the test-VM and try to upgrade the bootloader/DSM there first - if it fails, just revert to your old snapshot and report it on the forum / wait until it gets fixed.
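The "fix the size parameter" step and the snapshot-based upgrade rehearsal can be sketched roughly as below. This is a minimal sketch, not a definitive recipe: the VM ID (100), the storage name (`local`), the image path, the `sata0` bus, the `52M` size, and the snapshot name `pre_upgrade` are all assumptions for illustration - substitute your own values.

```shell
# Check the image's actual size on disk (same idea as `ls -la vm-100-disk-1.qcow2`);
# qemu-img also reports the virtual size the guest will see:
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2

# Then make the size= option in the VM's config match the real size.
# Hypothetical line in /etc/pve/qemu-server/100.conf:
#   sata0: local:100/vm-100-disk-1.qcow2,size=52M

# Before touching the production VM, rehearse the upgrade on the image-backed
# test VM (qcow2 snapshots only work for image-backed disks, not passthrough):
qm snapshot 100 pre_upgrade    # snapshot the test VM first
# ...upgrade the bootloader/DSM inside the guest...
qm rollback 100 pre_upgrade    # revert if the upgrade breaks the VM
```

The reason the test VM must use image files rather than passthrough drives is visible here: `qm snapshot`/`qm rollback` need a disk format that supports snapshots, which raw passthrough disks do not.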