Floating-point errors occur only in operations whose exact mathematical results cannot be represented in the floating-point format. The errors are fully determined; they are not random or arbitrary, so identical operations always produce identical results.
In your first example, you assign "27.64" to $num1 and $num2. The operation here is that the parser must interpret the character string "27.64" and produce a floating-point result. The parser presumably produces the floating-point number closest to 27.64. (As a hexadecimal floating-point numeral, this number is 0x1.ba3d70a3d70a4p+4. The part before the "p" is a hexadecimal numeral with a fraction part; the "p+4" means to multiply by 2⁴. In decimal, it is 27.6400000000000005684341886080801486968994140625.) The same number is produced in both cases, so comparing $num1 and $num2 shows they are equal, even though neither of them equals 27.64 (since 27.64 cannot be exactly represented in floating point).
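As a quick sketch (assuming PHP, since the question uses $num1 and $num2; the variable names are just illustrative), you can see both the equal comparison and the value actually stored:

```php
<?php
// Both literals parse to the same nearest double, so they compare equal,
// even though that double is not exactly 27.64.
$num1 = 27.64;
$num2 = 27.64;
var_dump($num1 == $num2);   // bool(true)

// Printing extra digits exposes the stored value; the exact digits shown
// may vary slightly with the platform's formatting.
printf("%.20f\n", $num1);   // 27.64000000000000056843
```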
In your second example, the floating-point number closest to 27.60 is the same as the floating-point number closest to 27.6, because the two numerals represent the same value. So, again, you get the same results.
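A minimal sketch of the same point, again assuming PHP:

```php
<?php
// "27.60" and "27.6" denote the same mathematical value, so both convert
// to the same double and compare equal.
var_dump(27.60 == 27.6);   // bool(true)
```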
In your third example, the values of the two numerals are far enough apart that they convert to different floating-point numbers, and the comparison shows they are unequal.
In your fourth example, all the values are exactly representable in floating point, so there is no error. 25, 12.50, and 12.5 are all small multiples of powers of two (including powers with negative exponents, such as .5 = 2⁻¹) within the range of the floating-point type. Additionally, the sum of 12.50 and 12.5 is exactly representable, so there is no rounding error when adding them. All the results are exact, and the comparison shows that the sum equals 25.
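A small sketch of this fourth case (assuming PHP), where every value and the sum are exactly representable:

```php
<?php
// 12.5 is 25 * 2^-1, a small multiple of a power of two, so it is stored
// exactly; 12.50 is the same value, and their sum 25 is also exact.
$a = 12.50;
$b = 12.5;
var_dump($a + $b == 25);   // bool(true)
```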
Problems arise when people expect the same results from two different calculations that have the same mathematical result. A classic example is comparing ".3" with ".1 + .2". Converting the numeral ".3" to floating point yields the nearest representable value, which is 0x1.3333333333333p-2 (0.299999999999999988897769753748434595763683319091796875), slightly below .3. Converting ".1" to floating point yields the nearest representable value, which is 0x1.999999999999ap-4 (0.1000000000000000055511151231257827021181583404541015625), slightly above .1. Converting ".2" to floating point yields the nearest representable value, which is 0x1.999999999999ap-3 (0.200000000000000011102230246251565404236316680908203125), slightly above .2. Adding the latter two values yields the representable value closest to their sum, which is 0x1.3333333333334p-2 (0.3000000000000000444089209850062616169452667236328125). As you can see, this sum differs from the value obtained by converting ".3" to floating point, so comparing them shows they are unequal.
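This classic case can be reproduced directly (a sketch assuming PHP; the printed digits may differ slightly by platform):

```php
<?php
// .1 and .2 each round up slightly on conversion, and their rounded sum
// is not the same double that ".3" converts to, so the comparison fails.
var_dump(0.1 + 0.2 == 0.3);     // bool(false)

printf("%.20f\n", 0.1 + 0.2);   // 0.30000000000000004441
printf("%.20f\n", 0.3);         // 0.29999999999999998890
```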