I would expect this to be a feature already present in LWJGL; all that should be needed is flagging the window as DPI-aware before creation, and providing a scale option inside the graphics settings in case the high resolution is a problem.
On window resize, firing a buffer destroy/recreate, you simply grab the true window resolution (which should be returned by the window size call when DPI awareness is flagged correctly), factor it by the scale option, and feed that into the buffer creation.
You could also provide an auto mode in the scale slider, whose behaviour would be to grab the active scaling of the window at the time of a resize/monitor migration (a resize event is usually fired on a scale change).
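To make that concrete, here's a rough sketch of the sizing logic using the GLFW bindings that ship with LWJGL 3. The names (RenderScale, scaleOption, bufferSize) are made up for illustration, and the auto-mode interpretation is just one reasonable reading of the above, not how the game actually does it:

```java
import static org.lwjgl.glfw.GLFW.*;

public final class RenderScale {
    /** User slider: 1.0 = native, 2.0 = half-size buffer, 0.5 = 2x super-sample, <= 0 = auto. */
    static double scaleOption = 1.0;

    /** Pick the off-screen buffer resolution for the current window state. */
    static int[] bufferSize(long window) {
        int[] w = new int[1], h = new int[1];
        // Size in pixels rather than screen coordinates, so it already reflects
        // the OS DPI scaling once the window is created DPI-aware.
        glfwGetFramebufferSize(window, w, h);

        double scale = scaleOption;
        if (scale <= 0) { // "auto": follow the monitor's current content scale
            float[] sx = new float[1], sy = new float[1];
            glfwGetWindowContentScale(window, sx, sy);
            scale = sx[0]; // e.g. 2.0 on a 200% display -> render at logical size
        }
        return new int[] { (int) (w[0] / scale), (int) (h[0] / scale) };
    }
}
```

Call bufferSize() from the resize callback and feed the result into the buffer destroy/recreate mentioned above.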
Note that you cannot expect this to be an automatic feature inside LWJGL; the application using the library must still handle the given modes correctly, otherwise, if it's left to automatic DPI awareness, you'll get Mac users complaining that their small integrated chips are chugging at 5k.
If the feature is apparently missing in LWJGL, or not functioning as intended, please make that clear and don't close this as a "wontfix"; that's not how you solve problems. I can potentially fix the bug (if present) in LWJGL if need be (note: only for Windows and Linux however, as I do not own a high-DPI Mac).
Additional feature note: the scaling option could be allowed to go as low as 0.5x, which would provide a simple super-sampling feature.
... really...?
You could easily fix the behaviour, but keeping it in and making it look worse than previous versions is instead the intended goal...? why...?
... ok, the resolution is definitely a separate issue in itself; Fast and Fancy are also affected.
MC-191780
A very important note I will add to this: OpenGL effectively has a hard limit on how many calls can be pushed through per second. If adding more little details to grass, for example, involves one or two more GL calls, the maximum framerate will drop accordingly.
If you wish to optimise against this problem, you'll want to implement chunk batching. If you batched every 2x2x2 set of chunks (32x32x32 blocks) for example, you could cut your total draw calls by as much as 8 times. The one caveat to doing this, however, is that chunk re-bakes (updates) will take extra time, so it's advisable that the batching be done for distant chunks, or ones that are otherwise not updating frequently.
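As a rough illustration of the batching idea (hypothetical code, not the game's; it assumes each chunk's baked vertex data is already offset into a shared coordinate space):

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;

final class ChunkBatch {
    final int vbo = glGenBuffers();
    int vertexCount;

    /** Merge the already-baked vertex data of a chunk group (e.g. a 2x2x2 cell) into one VBO. */
    void rebuild(ByteBuffer[] chunkMeshes, int bytesPerVertex) {
        int total = 0;
        for (ByteBuffer m : chunkMeshes) total += m.remaining();
        ByteBuffer merged = BufferUtils.createByteBuffer(total);
        for (ByteBuffer m : chunkMeshes) merged.put(m.duplicate());
        merged.flip();
        vertexCount = total / bytesPerVertex;
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, merged, GL_STATIC_DRAW);
    }

    /** One draw call for the whole group instead of one per chunk. */
    void draw() {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // ...vertex attribute setup for the chunk vertex format goes here...
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }
}
```

The re-bake cost mentioned above comes from rebuild() having to re-upload the whole merged buffer whenever any one chunk in the group changes.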
An additional suggestion would be skinned meshes for entities, so that each entity can be rendered with a single set of draw calls instead of one set for every box they are made of. This would be incredibly simple to implement here as you'd only need a single bone per box.
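Sketching that out (again hypothetical names, not the game's code): each box of the model becomes one "bone", each vertex carries the index of its box, and the whole entity is drawn in one go with the per-box transforms held in a uniform array, e.g. uBones[int(aBoneIndex)] * position in the vertex shader:

```java
import static org.lwjgl.opengl.GL20.*;

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

final class EntitySkin {
    /** Upload one 4x4 matrix per box; the vertex shader selects the matrix by the
     *  bone (box) index stored in each vertex. */
    static void uploadBoneMatrices(int program, float[][] boxTransforms) {
        FloatBuffer buf = BufferUtils.createFloatBuffer(boxTransforms.length * 16);
        for (float[] m : boxTransforms) buf.put(m); // each m is a column-major mat4
        buf.flip();
        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "uBones"), false, buf);
    }
}
```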
Just curious, have any of you tried with the FOV set to exactly 60 degrees? Most games use 60 as it's the best sweet spot between warped and telescoped rendering. I verified in Minecraft, and 60 has almost no angle distortion, while not looking like your head is attached to a plank being swung around, which is how 50 degrees and lower looks.
An extra game-dev rule is limiting directional movement of cameras, in that they should only be moving in mostly one direction or rotation at a time. If the camera, for example, were to fly outwards from the target while also strafing to one side, that can cause spontaneous nausea in some people. In the case of Minecraft, I would suggest adding an option to snap motion to fixed rates, as opposed to having constant momentum, especially for creative mode. The user can then simply refrain from using WASD and the mouse at the exact same time, and not have to worry about the character still sliding around while they're looking around.
One other thing to note is that looking down can be a sure-fire way of triggering nausea in a lot of people, especially if you're way up high and slam the view straight down. The effect can also be witnessed in real life when looking down from a high-rise, for example; in a sense it could be confused with acrophobia, but they're not the same.
... I spoke too soon, it seems. 1.16 has introduced a new bug where, when the game is set to Fabulous! mode, the resolution is no longer correct and is lower than that of the display. In essence, the window size is not detected correctly, a lower buffer resolution is used, and the result is severe pixelation.
This is also technically a separate bug, as it's a fault specifically in the window size handling/detection code.
Render quality has improved marginally; we're not seeing many block edge artefacts. However, anisotropic filtering and multisample FBOs are still not present.
Item rendering still looks awful (a 1.14-and-up bug, I think), but that's technically a separate issue, I'm pretty sure.
[media]
The use of legacy render-to-texture causes hard pixelation and a screen-door effect, and prevents any form of driver render features from functioning, such as super-sampling, multi-sampling, morphological filtering, colour correction, buffer control and so on. AF appears to have been fixed in 1.15; however, the lack of AA means it's not exactly visible, nor are any AF sliders present yet.
Prior additional artefacts included poor performance and extreme pixelation in the inventory, where the FBO then had to be disabled to fix them. Though, since the FBO option was removed, these may no longer be an issue.
Current other artefact issues include items in-hand and in the world rendering with offset mesh coordinates, worsened when running on a 4k display or higher. Ice still renders all 6 sides randomly, while also Z-fighting, and rendering large numbers of chunks and/or items on-screen still causes immense performance degradation due to the use of legacy rendering.
So, this would mean that this issue needs to be addressed by the LWJGL team, not by Mojang?
No, it's on Mojang's side to update the render code. As far as I know, LWJGL has been updated to support current and future GL iterations.
The issue was confirmed long ago, and was primarily caused by the implementation of the unfinished FBO pipeline. The prior moderator forgot to change the ticket status back to confirmed.
The various issues are any number of graphical artefacts that occur as a result of using legacy GL with legacy render-to-texture, both of which are seldom supported, and block modern graphical features from functioning.
LWJGL only needs to support GL 3.2 to 4+ for it to not be an issue.
The primary changes needed to the pipeline are non-interfering spritemap generation (which fixes edge sampling issues), and stripping out the legacy render-to-texture and implementing proper multisample framebuffer passes. Additionally, GL_QUADS calls should be stripped out and replaced with instancing, or proper triangle generation (a quick sketch of the latter is below).
Once the above changes are made, a pair of new sliders for hardware Anti-Aliasing and Anisotropic-Filtering can be added, and the mip-map code can be stripped out.
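On the GL_QUADS point, a minimal illustration of the replacement (hypothetical code, not the game's): the quad vertex data stays exactly as it is, and an index buffer turns every quad into two triangles so the draw call can be GL_TRIANGLES:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;

final class QuadIndices {
    /** Build an element buffer that draws N quads as 2N triangles. */
    static int buildIndexBuffer(int quadCount) {
        IntBuffer indices = BufferUtils.createIntBuffer(quadCount * 6);
        for (int q = 0; q < quadCount; q++) {
            int v = q * 4;                        // first vertex of this quad
            indices.put(v).put(v + 1).put(v + 2)  // first triangle
                   .put(v + 2).put(v + 3).put(v); // second triangle
        }
        indices.flip();
        int ebo = glGenBuffers();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);
        return ebo;
    }

    /** With the element buffer bound, replaces the old quad-based draw call. */
    static void draw(int quadCount) {
        glDrawElements(GL_TRIANGLES, quadCount * 6, GL_UNSIGNED_INT, 0);
    }
}
```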
nope, no different to what it was before, other than the FBO option being stripped entirely...
If the full GL pipe is open now, I may just fix this myself...
Related to MCL-1657? I've been getting invalidated sessions lately when switching between PCs; it's only been an issue since the latest updates...
(deep regression of the above linked issue)
AA is not a post-process, nor a filter; it's performed in the initial rendering stage.
In C/C++ the calls in question are glTexImage2DMultisample, to allocate the multisample texture backing the framebuffer (as opposed to glTexImage2D), and glBlitFramebuffer to resolve it either to a secondary buffer for post-processing or to the back buffer, depending on what render effects you want and what data needs to be passed through.
In this case it's likely just:
> render everything to multisample buffer
> blit to a midway buffer (of the same res as the window)
> render post-process effects from mid buffer to back buffer
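A bare-bones sketch of that flow with the LWJGL GL bindings (buffer IDs, formats and sample counts are placeholders; this is not the game's code):

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.opengl.GL32.*;

final class MsaaPass {
    /** Create an FBO with a multisample colour texture and multisample depth renderbuffer. */
    static int createMsaaFbo(int width, int height, int samples) {
        int tex = glGenTextures();
        glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
        glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8, width, height, true);

        int depth = glGenRenderbuffers();
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);

        int fbo = glGenFramebuffers();
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, tex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
        return fbo;
    }

    /** Resolve (blit) the multisample FBO into the single-sample midway FBO. */
    static void resolve(int msaaFbo, int midwayFbo, int width, int height) {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, midwayFbo);
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        // The post-process pass then samples the midway FBO's texture and
        // writes to the back buffer (framebuffer 0).
    }
}
```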
The code never changed, so the behaviour is still exactly the same. Additionally, on 4k displays the lack of AA results in dithering (i.e. a block at the same distance could be either 55 or 56 pixels high depending on minute floating-point differences).
Confirmed: no change to the render pipe since 1.7/1.8; all the bugs I know from those versions still exist in 15w46a, including poor draw performance overall.
Has the pipeline been updated to use the modern multisample objects? If not, then the main issue still remains, i.e. FBOs must be disabled for AA to be functional.
A sub-issue seems to be that FRAPS (and I imagine others) doesn't recognise MC 1.8 as a valid 3D application. If something in the window system was changed recently, then that would be a likely cause, and it was likely done somewhat incorrectly; for example, rendering to a window texture rather than rendering to a window GL context is a bad idea, as it more than likely causes unneeded buffer copy-backs (i.e. VRAM > RAM > VRAM each frame, which is very bad and slow).
Back to the main issue: the common modern 'correct' deferred render is to use multisample framebuffers. They behave identically to standard ones, however their real size is determined at the driver level and they don't have a depth buffer you can simply read back in post-processing. The latter is not a problem: if you need depth in post-processing you can always define your own via gl_FragCoord, or, more optimally, just pass your positions through as another vec4, or alternatively pass the depth of the position as a float. Either of these should work, and you can change the depth curve if you desire (default GL depth is logarithmic-like). The only note is that gl_FragCoord, last I checked, was being retired and may not be present in later contexts.
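For the depth pass-through idea, a tiny GLSL sketch written as LWJGL-style string constants (names like vViewPos are made up): the vertex shader forwards the view-space position, and the fragment shader stashes linear depth in a spare channel so the post-process pass never needs to read the multisample depth attachment:

```java
final class DepthPassThrough {
    // Vertex shader: forward view-space position alongside the projected position.
    static final String VERT =
        "#version 150\n" +
        "in vec3 aPos;\n" +
        "uniform mat4 uModelView;\n" +
        "uniform mat4 uProj;\n" +
        "out vec3 vViewPos;\n" +
        "void main() {\n" +
        "    vec4 vp = uModelView * vec4(aPos, 1.0);\n" +
        "    vViewPos = vp.xyz;\n" +
        "    gl_Position = uProj * vp;\n" +
        "}\n";

    // Fragment shader: store linear view-space depth in the alpha channel
    // (one possible layout; a dedicated attachment works just as well).
    static final String FRAG =
        "#version 150\n" +
        "in vec3 vViewPos;\n" +
        "out vec4 outColor;\n" +
        "void main() {\n" +
        "    outColor = vec4(1.0, 1.0, 1.0, -vViewPos.z);\n" +
        "}\n";
}
```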
That being said, I can't necessarily explain the whole gist of modern GLSL rendering as I have my own projects to attend to, but this is only the very basic stuff and you can find various info all over the place.
Confirmed for:
290X - 14.4
540M - 331.65
Reported for many other GPUs on both AMD and Nvidia.
@Kumasasa that's for the old Intel chips; this is (presumably, from what the reporter says) an HD 4000, found in the 4k i-series CPUs.
Try updating, reinstalling, or completely removing the graphics drivers and letting Windows install a suitable version; I can confirm that the HD 3000 works.
@ampolive Makes sense, since that was created not too long after this one.