
I know that floating-point operations in computer languages are inherently imprecise! In fact, every seasoned developer knows this, often with some painful memories of past bug-hunting sessions. But newcomers might be surprised by this behavior.
So, let me first explain what is wrong with floating-point numbers. Let's use Python as our guinea pig:
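As a starting point, here is the classic demonstration. Because 0.1 and 0.2 have no exact representation in binary (IEEE 754 double precision), their sum is not exactly 0.3:

```python
# 0.1 and 0.2 cannot be stored exactly in binary floating point,
# so tiny representation errors accumulate in the sum.
result = 0.1 + 0.2
print(result)          # 0.30000000000000004
print(result == 0.3)   # False

# Printing with more digits reveals the value actually stored:
print(f"{0.1:.20f}")   # 0.10000000000000000555
```

The equality check fails even though, mathematically, 0.1 + 0.2 is exactly 0.3. This is why comparing floats with `==` is almost always a bug; comparisons should use a tolerance instead (e.g. `math.isclose`).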