Michael Meier

Comments

Can confirm that Robert C.'s simple workaround of using the kernel parameters "nohz=off highres=off" for the guest works just as well as my kernel recompile on Ubuntu 18.04's 4.15 kernel. The effect of this running at 250 Hz instead of 100 Hz seems to be absolutely negligible on my system (<1% additional CPU utilization). Thanks, Robert C.!
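For anyone wondering where those parameters go: on an Ubuntu guest they belong on the kernel command line. Assuming a standard GRUB setup (adapt this to whatever is already in your config), that means roughly:

```
# /etc/default/grub, inside the guest VM:
# append the two parameters to the existing line, e.g.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nohz=off highres=off"
```

followed by running "update-grub" and rebooting the guest.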

Dear Pony233, this is a bug tracker, not a discussion forum; please do not abuse it as such. If I had found a workaround simpler than "compile your own guest kernel", I would have written that. That does not mean no simpler workaround exists, only that I have not found one.

I've just been hit by this on 1.16.3 too, and want to share my findings, including a (non-trivial) workaround.

They seem to confirm what others in this thread already suspected: Minecraft calls a combination of sched_yield and futex with an insanely low timeout way too often, thereby burning a lot of CPU for absolutely nothing.

If you look at the strace output in the original report, you can see it calls futex(FUTEX_WAIT_PRIVATE) with a timeout of 100000 nanoseconds. That's 0.0001 seconds. Put otherwise, it requests no fewer than 10000 schedule-away-then-wake-up events every second.
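To make that pattern concrete, here is a minimal C sketch of what such a loop looks like (my own reproduction of the syscall pattern from the strace output, not Minecraft's actual code):

```c
/* futexburn.c - reproduce the sched_yield + short-timeout futex pattern
 * seen in the strace output. A hand-written sketch, not code taken from
 * Minecraft. Build with: gcc -O2 -o futexburn futexburn.c
 */
#define _GNU_SOURCE
#include <linux/futex.h>
#include <sched.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    uint32_t futex_word = 0;  /* nobody ever wakes this, so every wait times out */
    struct timespec timeout = { .tv_sec = 0, .tv_nsec = 100000 }; /* 0.0001 s */

    for (;;) {
        sched_yield();  /* give the CPU away... */
        /* ...then ask the kernel to schedule us back in 100 microseconds */
        syscall(SYS_futex, &futex_word, FUTEX_WAIT_PRIVATE,
                0 /* expected futex value */, &timeout, NULL, 0);
    }
}
```

Run this under strace and you should see the same endless stream of sched_yield and futex(FUTEX_WAIT_PRIVATE) calls as in the original report.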

That is a giant waste of CPU even on real hardware, as context switches are expensive, and it certainly prevents any meaningful power saving by the CPU. The reason it has a much more devastating effect in VMs is probably that the overhead there is far higher, and these timed wakeups are one of the few things that the virtualization hardware acceleration of current CPUs cannot handle in hardware.

Somewhere else I found the claim that simply running FreeBSD instead of Linux in the VM makes Minecraft an order of magnitude less CPU-hungry, and I can confirm that finding: running the exact same Minecraft version and map in a VM on the same host, with the only change being FreeBSD as the guest OS instead of Linux, makes the CPU usage on the host drop from 100% to 5% while the server is idle.

My attempt at an explanation: FreeBSD still uses a fixed-tick timing system, defaulting to 100 Hz, so even though Minecraft requests 10000 wakeups per second, it will never get more than about 100. Linux, on the other hand, is nowadays tickless by default and supports high-resolution timers, meaning you can schedule things at arbitrary intervals there, even something as idiotic as 10000 wakeups per second. You can, however, still change that default, and that is what I did:

I recompiled the Ubuntu 4.15 kernel with one major change compared to their default config: instead of tickless mode, 100 Hz ticks were forced (CONFIG_HZ_PERIODIC=y, CONFIG_HZ_100=y, and CONFIG_HIGH_RES_TIMERS not set). The result should be that, just like on FreeBSD, Minecraft gets something like 100 wakeups per second even though it requests 10000.
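Whether a given kernel actually coalesces such short sleeps like this can be checked with a few lines of C; this is my own quick test program, not something from the report:

```c
/* tickcheck.c - count how many 100 microsecond sleeps the kernel
 * actually grants in one second. My assumption: on a tickless kernel
 * with high-resolution timers this prints a number in the thousands;
 * with periodic 100 Hz ticks and no high-res timers it should drop
 * to roughly 100. Build with: gcc -O2 -o tickcheck tickcheck.c
 */
#include <stdio.h>
#include <time.h>

static double elapsed(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

int main(void)
{
    const struct timespec req = { .tv_sec = 0, .tv_nsec = 100000 }; /* 100 us */
    struct timespec start, now;
    long wakeups = 0;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        nanosleep(&req, NULL);  /* ask for a 100 us nap */
        wakeups++;
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while (elapsed(&start, &now) < 1.0);

    printf("%ld wakeups in one second\n", wakeups);
    return 0;
}
```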

The results speak for themselves: when this kernel is booted in a VM running two Minecraft servers, the CPU usage of that VM on the host drops from 200% (two cores fully utilized) to 5% while the Minecraft servers are idle. That is a HUGE improvement. It also shows how much CPU Minecraft wastes doing absolutely nothing if the kernel lets it.

Please, developers, fix this insane waste of CPU cycles for idle servers (not only) in virtual machines. Considering how many Minecraft servers run in VMs somewhere, fixing this problem would probably save quite a few tons of totally useless carbon dioxide emissions worldwide.