The time set command loses floating-point precision when given large values. This affects integer as well as decimal arguments. Internally, the command appears to pass values through a single-precision float before converting them to a 32-bit integer: the observed precision is about 24 bits, consistent with the 24-bit significand of an IEEE-754 single-precision float.
The values are also converted internally to 32-bit signed integers, but 24 bits of precision are not enough to represent every 32-bit integer exactly, so large tick counts get rounded.
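A quick way to see why 24 bits fall short of 32: the first integer a single-precision float cannot represent exactly is 2^24 + 1. This sketch (assuming standard IEEE-754 binary32 behavior, demonstrated via Python's struct module) round-trips values through a 32-bit float:

```python
import struct

def f32(x):
    """Round-trip a number through an IEEE-754 single-precision float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(f32(16777216))  # 2**24     -> 16777216.0 (exact)
print(f32(16777217))  # 2**24 + 1 -> 16777216.0 (rounded down; not representable)
```

Every integer above 2^24 is subject to this kind of rounding, and the error grows as the magnitude increases.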
Examples that show this loss of precision:
time set 36524.25d → sets time to 876582016; correct value is 876582000 (difference: 16).
time set 876582000 → sets time to 876582016; correct value is 876582000 (difference: 16). This is the same tick count as the previous example.
time set 54786375 → sets time to 54786376; correct value is 54786375 (difference: 1).
time set 1122024000 → sets time to 1122023936; correct value is 1122024000 (difference: 64).
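All four results are reproduced exactly by rounding the tick count through an IEEE-754 single-precision float. This sketch (an inference from the observed rounding, not confirmed against the game's code) uses Python's struct module to perform the float32 round-trip:

```python
import struct

def to_float32(x):
    """Round-trip a value through an IEEE-754 single-precision float,
    mimicking the command's suspected internal conversion."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Tick counts from the examples above (36524.25d = 36524.25 * 24000 = 876582000 ticks)
for ticks in (876582000, 54786375, 1122024000):
    got = int(to_float32(ticks))
    print(f"time set {ticks} -> {got} (off by {got - ticks})")
# 876582000  -> 876582016  (off by 16)
# 54786375   -> 54786376   (off by 1)
# 1122024000 -> 1122023936 (off by -64)
```

The error in each case equals the float32 rounding step at that magnitude: spacing between representable floats is 4 near 5.5e7, 64 near 8.8e8, and 128 near 1.1e9, which matches the observed differences of 1, 16, and 64.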