mojira.dev

David Chamberlin

Assigned

No issues.

Reported

MC-161230 xCenter and zCenter not set correctly for given filled_map (Invalid)
MC-148513 SIGSEGVs on loading client (Duplicate)
MC-148039 Chunk loading causes excessive tick durations at high render distances (Awaiting Response)
MC-140335 server-port value is ignored, server jar always uses 25565 (Duplicate)
MC-134900 server.properties generator-settings for level-type FLAT not implemented; property is stored in ignored flat_world_options NBT (Fixed)
MC-133377 Shift-F3-I does not copy NBT data Container contents (Works As Intended)
MC-132632 Can not climb 1 block height if player is in water 5 or more blocks from water source (Fixed)
MC-129425 Water Walking (Duplicate)
MC-123055 Villagers not breeding, recognizing houses or farming (Fixed)
MC-123035 Exit horse while jumping in water pauses state until ai change (Duplicate)
MC-98900 Upside down corner stairs incorrectly rotated. (Invalid)
MC-97053 Head yaw not animated, Head pitch and body rotation not animated without movement. (Duplicate)
MC-97033 Villagers open/close doors behind glass block wall. (Cannot Reproduce)
MC-97022 Cannot break block with any Sword item in Creative (Duplicate)
MC-82759 Player visually falls through/around solid block on login spawn with air below (Awaiting Response)

Comments

Regarding bee pathfinding, I've noticed that bees get sucked down into a whirlpool bubble column even when they are in the air above it. If I create a bubble column with magma blocks and then spawn the bee in an open-topped glass chute above it, the bee is sucked down. If I create a glass chute over ground, it almost always flies out the top. When the bee is sucked down, there is still a chance it can escape, but most of the time it drowns. That shouldn't happen, since the bubble column should not affect it while it is in the air.

 

Squids are not being pulled down properly in water over magma blocks in the same manner as other water mobs. All the other water mobs (cod, salmon, pufferfish, tropical fish, turtles, and dolphins) are pulled down into magma blocks properly. This is still an issue as of the 1.15.2 release.

Hard to say if they are related. HPET is system-wide and higher precision, but uses more resources; it allows for better synchronization when using multiple cores. TSC, by contrast, is per-CPU and faster, and synchronizes across all cores on Nehalem and later CPUs.

Might want to try both HPET standalone, and TSC with HPET as a backup. 

If your core is spending a lot of time servicing interrupts, then changing these could help.
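If it helps, on Linux the active clocksource can be inspected and switched through sysfs. This is a minimal sketch; the sysfs path is the standard one, but the files may be absent in containers or on non-Linux systems, hence the guard, and writing the value requires root:

```shell
# Inspect the kernel's current and available clocksources via sysfs.
CS=/sys/devices/system/clocksource/clocksource0
if [ -r "$CS/current_clocksource" ]; then
    echo "current:   $(cat "$CS/current_clocksource")"
    echo "available: $(cat "$CS/available_clocksource")"
else
    echo "clocksource sysfs interface not available here"
fi
# To try HPET for the current boot (as root):
#   echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource
# To force the choice persistently, boot with a kernel parameter such as:
#   clocksource=hpet   (or clocksource=tsc)
```

Switching this way takes effect immediately, so it's an easy A/B test while watching tick durations.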

Also, not sure if it will help, but there is a difference between running a server on an SMP kernel, a preemptible (PREEMPT) kernel, or an actual realtime kernel, as these also affect interrupts and their priorities.
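A quick way to see which preemption model the running kernel was built with (the `/boot/config-<release>` path is a Debian-family convention and varies by distro, hence the fallback to `uname`):

```shell
# Report the kernel's preemption model.
CFG="/boot/config-$(uname -r)"
if [ -r "$CFG" ]; then
    # Build-time options such as CONFIG_PREEMPT_NONE / _VOLUNTARY / PREEMPT
    grep -E '^CONFIG_PREEMPT' "$CFG"
else
    uname -v   # the kernel build banner usually mentions SMP and/or PREEMPT
fi
```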

Have you tried -pre6 yet? I've heard reports that some saw an improvement in performance from pre5 to pre6.

 

 

But the more relevant concern is whether the availableProcessors() method on that particular JVM under your VM is reporting the correct value (all the time), because it has been reported that on some VMs it can report only one core, or may report the hardware core count rather than the number of cores allocated to the VM.
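For reference, this is all it takes to see what the JVM reports on a given host; what it returns under a particular hypervisor or cgroup limit is exactly the open question (the class name here is my own):

```java
// Minimal check of the core count the JVM reports. On some hypervisors
// and VMs this can differ from the cores actually allocated to the guest.
public class CoreReport {
    public static int reportedCores() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("availableProcessors() = " + reportedCores());
    }
}
```

Running this inside the VM and comparing against the allocation you configured would confirm or rule out this part of the theory.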

The bug I'm referring to in SystemUtils is that it can never reach the code that uses the direct executor service instead of the fork-join pool in the single-core case, because the available core count minus one is clamped to a value from 1 to 7. At least, it was that way in 1.14.3.

If I recall correctly, it would stall at 0% when creating the spawn areas, waiting for the main thread to finish, because it couldn't allocate the worker thread in the pool on a single core.
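A minimal sketch of the clamping logic as I understand it (class and method names are illustrative, not the actual SystemUtils code): because cores minus one is clamped into [1, 7], the worker count can never be zero, so a zero-worker branch that would select a direct executor is unreachable.

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ForkJoinPool;

// Illustrative reconstruction of the described bug, not the real code.
public class ExecutorChoice {
    static int clampedWorkers() {
        int cores = Runtime.getRuntime().availableProcessors();
        // Clamp (cores - 1) into [1, 7]: even on a single core this is 1, never 0.
        return Math.min(Math.max(cores - 1, 1), 7);
    }

    static Executor chooseExecutor() {
        int workers = clampedWorkers();
        if (workers <= 0) {
            // Intended single-core path: run tasks directly on the calling
            // thread. Unreachable, because clampedWorkers() is always >= 1.
            return Runnable::run;
        }
        return new ForkJoinPool(workers);
    }

    public static void main(String[] args) {
        System.out.println("workers = " + clampedWorkers()
                + ", executor = " + chooseExecutor().getClass().getSimpleName());
    }
}
```

So on a one-core VM you still get a one-thread fork-join pool competing with the main thread, which fits the spawn-area stall described above.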

Is the overloaded hypervisor's handling of futexes the root cause? Perhaps the hypervisor is overloaded and would cope somewhat better on bare metal, and the real root cause is that the code's executor service allocates an incorrect number of threads, based on the core count reported by the Runtime's availableProcessors() method, which in turn may be incorrect on hypervisors and VMs.

The latter has been a known issue in the past for Java 6 through 9, coupled with the bug in SystemUtils that incorrectly allocates the direct executor service when there is only one core. So it's quite possible that the futexes are being overloaded because the system is time-slicing to handle threads that exceed the number of cores, whether caused by the bug, or by the Runtime reporting the full system core count rather than the VM-specific count.

 

Also confirming for 1.14 pre-5 on Debian

[media]

Well I tried to upload the log...

File "hs_err_pid6878.log" was not uploaded

Jira could not attach the file as there was a missing token. Please try attaching the file again.

OK, this is probably a duplicate of MC-148461

Additional notes about my test: there was a significant delay between typing /gamemode creative at the beginning of the game and that command actually being executed. Perhaps something is blocking on load and save, which may be related to why the game doesn't render the chunks periodically. The high tick-duration issues may also be related to how the threading works now, since they seemed more pronounced after I changed video settings and then loaded more chunks. It feels like the threading occasionally can't keep up with the workload.

I was testing with the integrated server in single-player with the default settings. Of course, the user should not have to 'tweak' JVM settings to get reasonable performance.

 

This one is the same hardware running a real-world test of the highest rendering setting on 1.13.2. Notice that although it's lazily loading the chunks, it doesn't seem to have the tps issues.

https://www.spawnchunk.com/downloads/2019-04-12%2011-10-05.mp4

 

Georgeii, I didn't seem to have any issues reproducing this issue, but then, maybe I don't have the latest and greatest hardware. 

I can show you a video I made, which I used for the screenshots I posted on my other bug report. It shows, with the debug screen open and the profiling graphs, what happens. It's a fairly large file, so I'll leave it posted on my server for a while until you can view or download it.

https://www.spawnchunk.com/downloads/2019-04-12%2010-28-49.mp4

If you want to email/PM me directly for the details, please feel free to do so, since the release date is approaching and I would like to get to the bottom of this.

I could not confirm that the bug as written in the description ("Worlds will not load and/or if they seldom do, they load chunks very slowly.") still exists in 1.14 Pre-Release 2.

I'm not sure that "Chunk loading/generation is slower than 1.13.2", as it appears to be loading chunks lazily, but I have not done any quantitative analysis, so my comments would be subjective.

But what I can state about 1.14 Pre-Release 2 after testing is that when I'm flying around in creative, chunks might not get loaded around the player as consistently as on 1.13.2, and it appears to fall behind in displaying them. And certainly, after a while with the render distance set to a high value like 32, the tick durations climb and don't come back down immediately. These problems still exist in 1.14 Pre-Release 2 and are usability showstoppers.

I've experienced a similar issue. In single-player or multiplayer, when looking around, the pitch/yaw glitches to a different position, but when I tested the issue in full-screen as you have, it goes away. I'm using 19w07a on Debian 9 / 4.17 kernel / Openbox (not KDE). It turned out, though, that my problem was related to the dock (Docky) interfering and stealing mouse focus when not in full-screen.

getting this on 1.8.0_191

 

The sad thing is that this issue is so critical that it is limiting other useful testing on multiplayer.

Just a note: I seem to be getting a significant improvement after changing my JVM from OpenJDK 10 to Oracle 8 (1.8.0_191) on the server and tweaking several JVM settings. It might be useful to note that it's using close to 4 GB for the server.

Arisa, not all bugs create crash reports. If you were a "smart" bot, you would know this and not automatically mark my reported issue as "Resolved".