numpy.linalg.norm(x) == numpy.linalg.norm(x.T), where .T stands for transposition. For a 2-D array, np.linalg.norm computes the Frobenius norm by default, which depends only on the entries, not their arrangement, so it does not matter.
For instance:
>>> import numpy as np
>>> x = np.random.rand(5000, 2)
>>> x.shape
(5000, 2)
>>> x.T.shape
(2, 5000)
>>> np.linalg.norm(x)
57.82467111195578
>>> np.linalg.norm(x.T)
57.82467111195578
Edit:
Given that your array is basically
x = [[real_1, training_1],
[real_2, training_2],
...
[real_n, training_n]]
then the Frobenius norm basically calculates
np.sqrt(np.sum(x**2))
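You can check this identity directly. A minimal sketch, using random data in place of your real/training columns (the data itself is an assumption):

```python
import numpy as np

# Stand-in for your (n, 2) array of [real, training] pairs.
rng = np.random.default_rng(0)
x = rng.random((5000, 2))

# For a 2-D array, np.linalg.norm defaults to the Frobenius norm,
# i.e. the square root of the sum of all squared entries.
frob = np.linalg.norm(x)
manual = np.sqrt(np.sum(x**2))

print(np.isclose(frob, manual))  # → True
```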
Are you sure this is the right metric for your problem? There are a number of other candidates. Here are three:
np.sqrt(np.sum((x[:,0] - x[:,1])**2))           # Euclidean norm of the difference
np.sqrt(np.sum(x[:,0]**2) + np.sum(x[:,1]**2)) # L^2 norm
np.sqrt(x[:,0].dot(x[:,1])) # sqrt dot product
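These generally give three different numbers for the same data. A quick sketch to compare them, again with random stand-in data (the column names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.random(5000)      # x[:, 0] in the notation above
training = rng.random(5000)  # x[:, 1]

# Euclidean norm of the difference vector -- often what an error metric means:
euclid = np.sqrt(np.sum((real - training) ** 2))

# L^2 norm of all the stacked values, identical to the Frobenius norm of x:
l2 = np.sqrt(np.sum(real**2) + np.sum(training**2))

# Square root of the dot product (only defined when the dot product is >= 0):
dot = np.sqrt(real.dot(training))

print(euclid, l2, dot)
```

If you are measuring how far the training values are from the real values, the first (the Euclidean distance between the two columns) is usually the one you want; the Frobenius norm mixes both columns into a single magnitude.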