• teeweehoo@alien.top · 1 year ago

    If I’m understanding your screenshot correctly, VMware Workstation is probably only emulating a 1 Gbit/s network connection (good old e1000). To verify, share the output of “lspci” and “ethtool …” from inside the VM.
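
    Something along these lines should show the emulated NIC model and the negotiated link speed (the interface name “ens33” is just a guess, substitute whatever the VM actually has):

        # which NIC model is being presented to the guest (e1000, vmxnet3, ...)
        lspci | grep -i ethernet
        # negotiated link speed of that interface
        ethtool ens33 | grep -i speed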

    Any reason you don’t want to use Hyper-V? It should provide a better experience than VirtualBox or VMware Workstation.

  • Choyneese@alien.top · 1 year ago

    Maybe the Workstation VM is running on E-cores? This has been a problem with it since Intel’s 12th gen launched; performance can be all over the place.

  • UltimateBachson@alien.top (OP) · 1 year ago

    Win11, Z690, 13700k, 32GB DDR4, 2.5Gbit Realtek Ethernet NIC, Samsung 980 PRO NVMe.

    I just installed Debian 12 on both VMware Workstation (10.1.2.60) and VirtualBox (10.1.2.108), both with 4 “CPUs” and 4 GB RAM, and both using a bridged network to my one and only Realtek NIC. Everything else is at defaults (I also tried “vmxnet3” on VMware, no difference). Running iperf3 on my Windows host (10.1.2.15) and testing from both VMs produces these results.
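
    For reference, the test itself is nothing fancier than a stock iperf3 run, roughly like this:

        # on the Windows host (10.1.2.15)
        iperf3 -s
        # from inside each Debian VM
        iperf3 -c 10.1.2.15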

    I host my media server as a VM on VMware, and I don’t understand why VirtualBox is this much faster.

    Hyper-V is disabled; Open-VM-Tools is installed on the Workstation VM.

    If you could help me figure out what’s happening, or point me in the right direction to debug this, I’d be grateful.

    • cbugk@alien.top · 1 year ago

      I’m more familiar with this on Linux, but the answer is bridged networking. It must be, since you have surpassed the 2.5 Gbit theoretical limit of the NIC, and that’s before packet overhead. Basically your traffic never leaves the “virtual switch” to reach the real NIC, so it can be a lot faster.

      In one instance, I had seen around 35-40 Gbps over iperf3 on a beefy PC with around 100 GB of free RAM and a proper Gen4 NVMe SSD. I think it was because the free RAM could hold every packet sent/received in cache; it probably couldn’t be replicated in prod.

      So, test again with another LAN-connected machine (2.5 Gbps if possible) and you should be bound by the laws of physics once again ; )
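
      For example (10.1.2.50 is a made-up address for that other machine):

          # server on the other physical LAN machine
          iperf3 -s
          # client inside the VM, which forces the traffic out over the real NIC
          iperf3 -c 10.1.2.50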

      • UltimateBachson@alien.top (OP) · 1 year ago

        Yes, testing iperf from those VMs to another LAN machine unsurprisingly never exceeds 1 Gbps (my other LAN machine doesn’t support 2.5), but VMware is still slower. Maybe it’s due to Workstation using the 13700K E-cores, as someone else commented.

        The thing is, since my Win11 PC is hosting those 2 VMs, I’d expect VM-to-host / host-to-VM network transfers to be faster. Using NAT instead of bridged does improve the transfer speeds, yes, but VMware is still behind VirtualBox, even with vmxnet3 instead of e1000.

        Anyway, thanks for the reply. It might just come down to being a “Windows thing”; I never had these inconsistencies on a Proxmox host, for example.

        • cbugk@alien.top · 1 year ago

          OK, a bit of trivia to get off my chest first [source][1]:

          • Para-virtualization: the guest is aware it is virtualized and makes high-level calls to the hypervisor instead of issuing hardware commands.
          • Hardware-assisted virtualization: the silicon has instructions to speed up virtualization.

          For some time I thought those two were inseparable; turns out they are not.

          This seems irrelevant, but VMXNET3 can be paravirtualized yet not hardware assisted (whereas E1000 is emulated outright).
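
          Quick way to see which one the guest actually ended up with (interface name is a guess again):

              # "driver: vmxnet3" means the paravirtual NIC, "driver: e1000" means the emulated one
              ethtool -i ens33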

          Meanwhile [PVRDMA][2] (which allows shared memory between VMs that have PVRDMA) does something similar to what Linux bridges do by default (iptables forwarding without emulation). Hardware-assisted para-virtualization, in effect :D

          This has the potential to run above 1 Gbit; could you try it?

          1 2

  • DellR610@alien.top · 1 year ago

    Have you tried iperf from the VM to another physical machine on the network? Since iperf uses RAM, there may be some funky resource management happening between VMware and Windows 11 with that much RAM activity going on at the same time. You may also want to disable any dynamic RAM settings and make it as static as possible.

  • insecurityguy@alien.top · 1 year ago

    Likely a TCP window size issue (for a single TCP session, as the VM switch might introduce delay); Windows ships with a fairly low default TCP window size.

    Try iperf with 10 simultaneous sessions. If the total looks better, increase the TCP window size on the Windows machine.
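
    With iperf3 that would be something like (the -w value is only an example):

        # 10 parallel streams to the host
        iperf3 -c 10.1.2.15 -P 10
        # or raise the window / socket buffer size for a single stream
        iperf3 -c 10.1.2.15 -w 1M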

  • Zero_Karma_Guy@alien.top · 1 year ago

    We were getting much better performance out of Proxmox than anything else in testing. You might consider having a development box dedicated to Proxmox.