Calculating the percentage error by comparing two arrays

I have some data in two numpy arrays.

    a = [1, 2, 3, 4, 5, 6, 7]
    b = [1, 2, 3, 5, 5, 6, 7]

I treat array a as my computed result and array b as the true values. I want to calculate the percentage error in my result. I could loop over both arrays, recording 0 where the values match and 1 where they differ, then sum those flags and divide by the total number of values to get the error rate (a sketch of this manual approach follows).
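A minimal sketch of that manual loop, assuming plain Python lists (the variable names are mine, for illustration):

    a = [1, 2, 3, 4, 5, 6, 7]
    b = [1, 2, 3, 5, 5, 6, 7]

    # 1 for each position where the values differ, 0 where they match.
    mismatches = sum(1 for x, y in zip(a, b) if x != y)

    # Fraction of mismatched positions, scaled to a percentage.
    error = mismatches / len(a)
    print(error * 100)  # roughly 14.29 percent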

Is there any possible quick and elegant way to do this?

2 answers

First, calculate the positions where a and b differ with a != b, then take the mean of the resulting boolean array:

    >>> import numpy as np
    >>> a = np.array([1, 2, 3, 4, 5, 6, 7])
    >>> b = np.array([1, 2, 3, 5, 5, 6, 7])
    >>> error = np.mean(a != b)
    >>> error
    0.14285714285714285
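If an actual percentage is wanted rather than a fraction, all that is left is multiplying by 100 (a trivial extension, not part of the original answer):

    >>> error * 100  # as a percentage, roughly 14.29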

Something along the lines of:

    >>> a = np.array([1, 2, 3, 5, 5, 6, 7])
    >>> b = np.array([1, 2, 3, 4, 5, 6, 7])
    >>> (a != b).sum() / float(a.size)
    0.14285714285714285
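A side note of mine, not from the answer: the float() cast guards against Python 2's integer division; under Python 3, true division makes it unnecessary and gives the same result:

    >>> (a != b).sum() / a.size
    0.14285714285714285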

Update: I am wondering why this is a little faster:

    import numpy as np
    from timeit import timeit

    a = np.random.randint(4, size=1000)
    b = np.random.randint(4, size=1000)

    # Timings reported in the original answer:
    timeit('from __main__ import a, b; (a != b).sum()/float(a.size)', number=10000)
    # 0.42409151163039496
    timeit('from __main__ import a, b, np; np.mean(a != b)', number=10000)
    # 0.5342614773662717
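One plausible reason (my guess; the thread does not settle it): np.mean goes through an extra layer of Python function dispatch and computes a floating-point reduction over the boolean array, while (a != b).sum() performs a plain integer reduction followed by a single scalar division. The method form of mean can be timed the same way for comparison:

    timeit('from __main__ import a, b; (a != b).mean()', number=10000)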

Source: https://habr.com/ru/post/959388/

