Alright - the example with 0.1 is well understood. Decimal fractions like 0.1 and 0.2 do not have a finite expansion as binary fractions. The reason the noise is so far out on the end is that JS uses double-precision floats, so the error is pretty small. But it is UNAVOIDABLE in binary floating-point arithmetic.
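A quick sketch of the effect in any JS console or Node REPL (the exact digits shown are what IEEE 754 doubles produce):

```javascript
// Neither 0.1 nor 0.2 has a finite binary expansion, so each is
// stored as the nearest double; the sum carries that error in the
// least-significant bits.
console.log(0.1 + 0.2);                  // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);          // false

// The usual workaround: compare within an epsilon-scale tolerance.
console.log(Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON);  // true
```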
And that, folks, is why COBOL (and PL/I) have support for fixed-point DECIMAL arithmetic, so that 10 dimes will actually make a whole dollar without rounding.
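JS has no built-in decimal type, but the fixed-point idea can be mimicked by keeping money as an integer count of the smallest unit - a minimal sketch, not how COBOL does it internally:

```javascript
// Binary floating point: ten dimes do not quite make a dollar.
let dollars = 0;
for (let i = 0; i < 10; i++) dollars += 0.1;
console.log(dollars);          // 0.9999999999999999
console.log(dollars === 1);    // false

// Fixed-point style: count integer cents, which doubles represent
// exactly in this range, so the sum comes out whole.
let cents = 0;
for (let i = 0; i < 10; i++) cents += 10;  // a dime is exactly 10 cents
console.log(cents === 100);    // true
```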
Floating-point binary is wonderful for many things, but it is not without surprises awaiting the unwary. Without support for un-normalized values, a sufficiently small number added to a sufficiently large number will equal that large number. The small number is called a "relative zero" for the large number. This only happens when the dynamic range of values is very large, but there are science and engineering computations where this must be considered quite carefully. This is not limited to any particular programming language; it is a consequence of computer arithmetic, in the dark corners, being a very poor approximation of the real numbers. Airplanes have disintegrated in mid-air before this was sufficiently understood.
Best title ever.
I also think the example with a sum of two floating-point numbers and the example with octal numbers aren't good ones. JS has quite a lot of uniquely bad design decisions.
Have you seen the horrifying example where you can change the value of an integer constant? As in:
7 + 1 = 8
(perform obscene incantation)
7 + 1 = 42
(I did not include the incantation lest the weak-willed hurt themselves.)