Normalize / translate ndarray - Numpy / Python

Is there an easy way to normalize an ndarray (all values between 0.0 and 1.0)?

For example, I have a matrix such as:

a = [[1., 2., 3.],
     [4., 5., 6.],
     [7., 8., 9.]]

So far I get the maximum value with

max_val = max(max(row) for row in a)
np.asarray(a) / max_val

Also, I think numpy might have a method for doing this in a single line. My approach doesn't work if my data looks something like this:

b = [[-1., -2., -3.],
     [-4., -5., -6.],
     [-7., -8., 0.]]

which fails with a zero-division error, since the maximum here is 0.
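To make the failure mode concrete, here is a minimal sketch (variable names are my own) of dividing by the maximum when that maximum is 0:

```python
b = [[-1., -2., -3.],
     [-4., -5., -6.],
     [-7., -8., 0.]]

# The largest value in b is 0.0
max_val = max(max(row) for row in b)

# Dividing every element by max_val raises ZeroDivisionError:
# [[x / max_val for x in row] for row in b]
```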

In other words, I want a linear rescaling: in the first example the minimum (1) should map to 0 and the maximum (9) to 1, with everything in between scaled proportionally, and it should also handle negative values like in the second example.

Is there a way to do this with numpy?


You can use np.ptp [1] (the peak-to-peak range, i.e. the distance between the maximum and the minimum) together with np.min:

new_arr = (a - a.min())/np.ptp(a)

Demo:

>>> a = np.array([[-1., 0, 1], [0, 2, 1]])
>>> np.ptp(a)
3.0
>>> a
array([[-1.,  0.,  1.],
       [ 0.,  2.,  1.]])
>>> (a - a.min())/np.ptp(a)
array([[ 0.        ,  0.33333333,  0.66666667],
       [ 0.33333333,  1.        ,  0.66666667]])

This shifts a so that its minimum becomes 0, then divides by the range, so the maximum becomes 1. It works regardless of whether a contains negative values.
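For completeness, here is the same formula applied to the second array from the question (a sketch; I've converted b to an ndarray first):

```python
import numpy as np

b = np.array([[-1., -2., -3.],
              [-4., -5., -6.],
              [-7., -8., 0.]])

# min is -8, ptp (range) is 0 - (-8) = 8
new_b = (b - b.min()) / np.ptp(b)
# new_b runs from 0.0 (where b was -8) to 1.0 (where b was 0)
```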

[1] IIRC, np.ptp is just np.max minus np.min under the hood. Still, combining ptp with np.min makes the intent clearer than spelling out max and min yourself.
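A quick check of that claim, using the example array from the answer:

```python
import numpy as np

a = np.array([[-1., 0, 1], [0, 2, 1]])

# np.ptp is equivalent to max minus min
assert np.ptp(a) == a.max() - a.min()
```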


Source: https://habr.com/ru/post/1532162/
