Why is the result different between the interactive Python interpreter and running the same code from a file?

I ran some simple code in the interactive Python interpreter:

    Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import numpy as np
    >>> x = np.array([0,1])
    >>> w = np.array([0.5,0.5])
    >>> b = -0.7
    >>> np.sum(w*x)+b
    -0.19999999999999996

The result, -0.19999999999999996, looks strange. I think this is caused by IEEE 754 floating-point rounding. But when I run almost the same code from a file, the result is very different.

    import numpy as np
    x = np.array([0,1])
    w = np.array([0.5,0.5])
    b = -0.7
    print(np.sum(w * x) + b)

This time the result is -0.2, as if IEEE 754 rounding did not affect the result.

What is the difference between running the code from a file and typing it into the interpreter?


The difference comes from how the interactive interpreter displays results.

The print() function uses the object's __str__ method, while the interactive interpreter displays an expression's value using the object's __repr__ method.

If in the interpreter you wrote:

    ...
    z = np.sum(w*x) + b
    print(z)

(which is what your script does), you would see -0.2.

Similarly, if in the code you wrote:

    print(repr(np.sum(w * x) + b))

(which is what the interpreter does), you would see -0.19999999999999996.
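To make the comparison concrete, here is a small sketch (variable names follow the question) printing the same value both ways; note the stored value is identical, only its textual rendering differs:

```python
import numpy as np

x = np.array([0, 1])
w = np.array([0.5, 0.5])
b = -0.7

z = np.sum(w * x) + b  # a numpy float64 scalar

print(repr(z))  # full-precision repr, like the interactive prompt
print(z)        # str(); older numpy versions rounded this display to -0.2
```

Either way, z compares unequal to the literal -0.2, because 0.5 - 0.7 cannot be represented exactly in binary floating point.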


I think the difference is that your file-based code uses print(), which converts the number to a string, whereas in the interpreter you do not call print(); you ask the interpreter to display the result, and it does so with repr().
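One caveat: in Python 3, str() and repr() of a plain Python float produce the same shortest round-trip representation, so the -0.2 display in the question comes from how older numpy versions format float64 scalars in __str__, not from print() itself. A quick check with plain floats:

```python
# With plain Python floats, print() shows the full rounding artifact too:
val = 0.5 - 0.7
print(val)        # -0.19999999999999996
print(repr(val))  # -0.19999999999999996 (str and repr agree for floats)
```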


Source: https://habr.com/ru/post/1271658/

