This is a technical issue related to how Minecraft is developed rather than a visible in-game bug. I am not reporting it because a feature is broken, but because resource leaks are bad and should be prevented at all times.
The issue
In net.minecraft.world.level.chunk.storage.RegionFile, there is a resource leak, which I found after decompiling 20w51a. Have a look at the following two methods in RegionFile:
@Nullable
private DataInputStream createChunkInputStream(ChunkPos pos,
                                               byte versionId,
                                               InputStream in) throws IOException {
    RegionFileVersion version = RegionFileVersion.fromId(versionId);
    if (version == null) {
        LOGGER.error("Chunk {} has invalid chunk stream version {}",
            pos, Byte.valueOf(versionId));
        // Oh no: 'in' never gets closed. Resource leak found!
        // In this case we would not even need to open the resource:
        // the malformed version can also be caught BEFORE calling
        // 'createChunkInputStream', so that no resource has to be
        // opened at all.
        return null;
    } else {
        return new DataInputStream(new BufferedInputStream(version.wrap(in)));
    }
}
@Nullable
private DataInputStream createExternalChunkInputStream(ChunkPos pos, byte versionId) throws IOException {
    Path extPath = getExternalChunkPath(pos);
    if (!Files.isRegularFile(extPath)) {
        LOGGER.error("External chunk path {} is not file", extPath);
        return null;
    } else {
        return createChunkInputStream(
            pos, versionId,
            // v Newly opened input stream, which createChunkInputStream
            // v does not close upon error!
            Files.newInputStream(extPath));
    }
}
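The immediate fix is simple: close the stream before returning null. Here is a minimal, self-contained sketch of that pattern (class name, the isValidVersion helper, and the simplified signature are mine for illustration, not Mojang's actual code):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class SafeChunkStream {
    // Hypothetical stand-in for RegionFileVersion.fromId: only ids 1-3 are valid.
    static boolean isValidVersion(byte id) {
        return id >= 1 && id <= 3;
    }

    // Like createChunkInputStream, but closes 'in' before returning null,
    // so the caller never leaks the stream on an invalid version id.
    static DataInputStream createChunkInputStream(byte versionId, InputStream in) throws IOException {
        if (!isValidVersion(versionId)) {
            in.close(); // the line the decompiled code is missing
            return null;
        }
        return new DataInputStream(in);
    }

    public static void main(String[] args) throws IOException {
        AtomicBoolean closed = new AtomicBoolean(false);
        // A stream that records whether close() was called on it.
        InputStream tracking = new ByteArrayInputStream(new byte[0]) {
            @Override
            public void close() throws IOException {
                closed.set(true);
                super.close();
            }
        };
        // Invalid version: result is null AND the stream was closed.
        DataInputStream result = createChunkInputStream((byte) 99, tracking);
        System.out.println(result == null && closed.get()); // prints "true"
    }
}
```

Closing the stream in the error branch keeps the fix local, but as noted in the comments above, the cleaner solution is to validate the version before the stream is ever opened.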
As long as no region files are corrupted, this leak never occurs; but as soon as region files are corrupted, multiple InputStreams will be opened and never closed.
A possible fix
To help fix the bug, here is a possible change: the check for a valid compression type can also be performed in getChunkDataInputStream, before any stream is opened. I propose a few changes to that method:
@Nullable
public synchronized DataInputStream getChunkDataInputStream(ChunkPos pos) throws IOException {
    int off = getOffset(pos);
    if (off == 0) {
        return null;
    } else {
        int num = getSectorNumber(off);
        int secs = getNumSectors(off);
        int secBytes = secs * 4096;
        ByteBuffer buf = ByteBuffer.allocate(secBytes);
        file.read(buf, num * 4096L);
        buf.flip();
        if (buf.remaining() < 5) {
            LOGGER.error("Chunk {} header is truncated: expected {} but read {}",
                pos, Integer.valueOf(secBytes), Integer.valueOf(buf.remaining()));
            return null;
        } else {
            int unpaddedBytes = buf.getInt();
            byte header = buf.get();
            // Instead of checking this in createChunkInputStream,
            // check it here, before any resource is opened
            byte version = getExternalChunkVersion(header);
            if (!RegionFileVersion.isValidVersion(version)) {
                LOGGER.error("Chunk {} has invalid chunk stream version {}",
                    pos, version);
                return null;
            }
            if (unpaddedBytes == 0) {
                LOGGER.warn("Chunk {} is allocated, but stream is missing", pos);
                return null;
            } else {
                int payloadBytes = unpaddedBytes - 1;
                if (isExternalStreamChunk(header)) {
                    if (payloadBytes != 0) {
                        LOGGER.warn("Chunk has both internal and external streams");
                    }
                    // Reuse 'version' here; we already computed it
                    return createExternalChunkInputStream(pos, version);
                } else if (payloadBytes > buf.remaining()) {
                    LOGGER.error("Chunk {} stream is truncated: expected {} but read {}",
                        pos, Integer.valueOf(payloadBytes), Integer.valueOf(buf.remaining()));
                    return null;
                } else if (payloadBytes < 0) {
                    LOGGER.error("Declared size {} of chunk {} is negative",
                        Integer.valueOf(unpaddedBytes), pos);
                    return null;
                } else {
                    // Reuse 'version' here; we already computed it
                    return createChunkInputStream(pos, version,
                        createStream(buf, payloadBytes));
                }
            }
        }
    }
}
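The core idea of this fix (validate the 5-byte chunk header — a 4-byte big-endian length plus a 1-byte version — before opening any stream) can be sketched independently of RegionFile. The class name, the isValidVersion helper, and the hard-coded version ids below are my own illustration, not Mojang's code:

```java
import java.nio.ByteBuffer;

public class HeaderCheck {
    // Hypothetical mirror of the known version ids: 1 = gzip, 2 = zlib, 3 = uncompressed.
    static boolean isValidVersion(byte id) {
        return id == 1 || id == 2 || id == 3;
    }

    /**
     * Parses a chunk header (4-byte length + 1-byte version) and returns the
     * payload length, or -1 if the header is invalid. No stream is opened
     * until this check has passed, so nothing can leak.
     */
    static int validateHeader(ByteBuffer buf) {
        if (buf.remaining() < 5) {
            return -1; // truncated header
        }
        int unpaddedBytes = buf.getInt();
        byte version = (byte) (buf.get() & ~128); // strip the external-chunk flag bit
        if (!isValidVersion(version) || unpaddedBytes <= 0) {
            return -1; // invalid version or missing stream: bail out before opening anything
        }
        return unpaddedBytes - 1; // payload length excludes the version byte
    }

    public static void main(String[] args) {
        ByteBuffer ok = ByteBuffer.allocate(16).putInt(10).put((byte) 2);
        ok.flip();
        System.out.println(validateHeader(ok)); // prints "9"

        ByteBuffer bad = ByteBuffer.allocate(16).putInt(10).put((byte) 99);
        bad.flip();
        System.out.println(validateHeader(bad)); // prints "-1"
    }
}
```

Because validation happens on the already-buffered header bytes, the invalid-version branch returns before Files.newInputStream is ever reached.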
(All sources here use the official mappings and were decompiled with Fernflower; variables have been renamed by me for clarity in the code samples.)