Scipy function always returns a numpy array

I have come across a scipy function that returns a numpy array no matter what is passed to it. In my application I only ever need to pass scalars and lists, so the only "problem" is that when I pass a scalar, an array with one element is returned (where I would expect a scalar). Should I ignore this behavior, or hack the function so that when a scalar is passed a scalar is returned?

Code example:

#! /usr/bin/env python

import scipy
import scipy.optimize
from numpy import cos

# This is some function we want to compute the inverse of
def f(x):
    y = x + 2*cos(x)
    return y

# Given y, this returns x such that f(x) = y
def f_inverse(y):
    # This will be zero if f(x) = y
    def minimize_this(x):
        return y - f(x)
    # A guess for the solution is required
    x_guess = y
    x_optimized = scipy.optimize.fsolve(minimize_this, x_guess)  # THE PROBLEM COMES FROM HERE
    return x_optimized

# If I call f_inverse with a list, a numpy array is returned
print f_inverse([1.0, 2.0, 3.0])
print type( f_inverse([1.0, 2.0, 3.0]) )

# If I call f_inverse with a tuple, a numpy array is returned
print f_inverse((1.0, 2.0, 3.0))
print type( f_inverse((1.0, 2.0, 3.0)) )

# If I call f_inverse with a scalar, a numpy array is returned
print f_inverse(1.0)
print type( f_inverse(1.0) )

# This is the behaviour I expected (scalar passed, scalar returned).
# Adding [0] on the return value is a hacky solution (things would then break
# if a list were actually passed).
print f_inverse(1.0)[0]  # <- bad solution
print type( f_inverse(1.0)[0] )

On my system, the output is:

[ 2.23872989  1.10914418  4.1187546 ]
<type 'numpy.ndarray'>
[ 2.23872989  1.10914418  4.1187546 ]
<type 'numpy.ndarray'>
[ 2.23872989]
<type 'numpy.ndarray'>
2.23872989209
<type 'numpy.float64'>

I am using SciPy 0.10.1 and Python 2.7.3 provided by MacPorts.

Solution

After reading the answers below, I settled on the following solution. Replace the return line in f_inverse with:

if(type(y).__module__ == np.__name__):
    return x_optimized
else:
    return type(y)(x_optimized)

Here, return type(y)(x_optimized) makes the return type the same as the type the function was called with. Unfortunately, this does not work if y is a numpy type, so if(type(y).__module__ == np.__name__) is used to detect numpy types, using the idea presented here, and exclude them from the type conversion.
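For reference, this is roughly what f_inverse looks like with that return logic folded in (a sketch assembled from the code above; it assumes numpy has been imported as np, which the check requires):

import numpy as np
import scipy.optimize
from numpy import cos

def f(x):
    return x + 2*cos(x)

def f_inverse(y):
    def minimize_this(x):
        return y - f(x)
    x_optimized = scipy.optimize.fsolve(minimize_this, y)
    # Leave numpy inputs alone; otherwise rebuild the caller's type
    # (float -> float, list -> list, tuple -> tuple).
    # Note: list/tuple conversion still leaves numpy float64 elements inside.
    if type(y).__module__ == np.__name__:
        return x_optimized
    return type(y)(x_optimized)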

+1
python numpy scipy duck-typing dynamic-typing
Sep 24
4 answers

The first line of the implementation of scipy.optimize.fsolve is:

x0 = array(x0, ndmin=1)

This means that your scalar will be converted into a 1-element array, and a 1-element sequence will pass through essentially unchanged.

The fact that it works is an implementation detail, and I would refactor your code so that it does not send a scalar into fsolve. I know that may seem pedantic, but the function asks for an ndarray for this argument, so you should respect the interface to be robust against changes in the implementation. However, I see no problem with conditionally using x_guess = array(y, ndmin=1) to convert scalars into an ndarray in your wrapper function and, if necessary, converting the result back to a scalar.
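A minimal sketch of that wrapper pattern (the function name and the np.isscalar check are my own illustration, not anything prescribed by scipy):

import numpy as np
import scipy.optimize

def solve_wrapper(func, guess):
    # Always hand fsolve an ndarray, as its interface asks for,
    # and mirror a scalar input with a scalar output.
    scalar_input = np.isscalar(guess)
    x0 = np.array(guess, ndmin=1)
    result = scipy.optimize.fsolve(func, x0)
    return result.item() if scalar_input else result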

Here is the relevant part of the docstring for fsolve:

def fsolve(func, x0, args=(), fprime=None, full_output=0,
           col_deriv=0, xtol=1.49012e-8, maxfev=0, band=None,
           epsfcn=0.0, factor=100, diag=None):
    """
    Find the roots of a function.

    Return the roots of the (non-linear) equations defined by
    ``func(x) = 0`` given a starting estimate.

    Parameters
    ----------
    func : callable f(x, *args)
        A function that takes at least one (possibly vector) argument.
    x0 : ndarray
        The starting estimate for the roots of ``func(x) = 0``.

    ----SNIP----

    Returns
    -------
    x : ndarray
        The solution (or the result of the last iteration for an
        unsuccessful call).

    ----SNIP----
+3
Sep 24 '12 at 13:21

This is how you can convert numpy arrays to lists and numpy scalars to Python scalars:

>>> x = np.float32(42)
>>> type(x)
<type 'numpy.float32'>
>>> x.tolist()
42.0

In other words, the tolist method on np.ndarray handles scalars specially.

It still leaves you with singleton lists, but they are fairly easy to handle in the usual way.
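For example (a quick sketch, reusing the numbers from the question's output):

>>> import numpy as np
>>> np.array([2.23872989, 1.10914418, 4.1187546]).tolist()
[2.23872989, 1.10914418, 4.1187546]
>>> np.array([2.23872989]).tolist()   # scalar case: still a singleton list
[2.23872989]
>>> np.float64(2.23872989).tolist()   # numpy scalar -> plain Python float
2.23872989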

+2
Sep 24 '12 at 15:04

I think wim's answer mostly covers this already, but maybe this makes the differences a bit clearer.

The numpy scalar you get from array[0] should be (almost?) fully compatible with the standard Python float:

a = np.ones(2, dtype=float)
isinstance(a[0], float) == True  # even this is true.

For the most part, a size-1 array is already compatible with both a scalar and a list, although, for example, it is a mutable object while a float is not:

a = np.ones(1, dtype=float)
import math
math.exp(a)  # works
# it is not an isinstance of float though
isinstance(a, float) == False

# The size-1 array sometimes behaves more like a number:
bool(np.zeros(1)) == bool(np.asscalar(np.zeros(1)))
# while a non-empty list is always True:
bool([0]) != bool(np.zeros(1))

# And because it is mutable, in-place operations might create confusion:
a = np.ones(1); c = a; c += 3
b = 1.;         c = b; c += 3
a != b

So, if the user does not need to know about this, I think the first is good, the second is dangerous.

You can also use np.asscalar(result) to convert a size-1 array (of any dimensionality) to the corresponding Python scalar:

In [29]: type(np.asscalar(a[0]))
Out[29]: float

If you want to make sure there are no surprises for a user who does not need to know about numpy, you will have to at least extract element 0 whenever a scalar was passed in. If the user does need to know, documenting the behaviour is probably just as good.
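A small sketch of that "element 0 only for scalar input" idea (the helper name is made up here; np.ndim returns 0 for plain Python scalars as well as numpy zero-dimensional values):

import numpy as np

def maybe_unwrap(y, result):
    # Return result[0] only when the original input y was a scalar,
    # so list/tuple/array callers still get a sequence back.
    return result[0] if np.ndim(y) == 0 else result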

+1
Sep 24

As @wim noted, fsolve converts your scalar into an ndarray of shape (1,) and returns an array of shape (1,).

If you really want to get a scalar as output, you can try putting the following at the end of your function:

if solution.size == 1:
    return solution.item()
return solution

(The item method copies an array element and returns a standard Python scalar)

+1
Sep 24 '12 at 14:29


