How to create a NumPy array from a large list of lists (Python)

I have a list of lists with 1,200 rows and 500,000 columns. How do I convert it to a NumPy array?

I read the answers to Bypass "Array too big" python error, but they don't help.

I tried putting it into a numpy array directly:

import random
import numpy as np
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
np.array(lol)

[Error]:

ValueError: array is too big.

Then I tried pandas:

import random
import pandas as pd
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
pd.lib.to_object_array(lol).astype(float)

[Error]:

ValueError: array is too big.

I also tried hdf5 as @askewchan suggested:

import h5py
filearray = h5py.File('project.data','w')
data = filearray.create_dataset('tocluster',(len(data),len(data[0])),dtype='f')
data[...] = data

[Error]:

    data[...] = data
  File "/usr/lib/python2.7/dist-packages/h5py/_hl/dataset.py", line 367, in __setitem__
    val = numpy.asarray(val, order='C')
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.py", line 460, in asarray
    return array(a, dtype, copy=False, order=order)
  File "/usr/lib/python2.7/dist-packages/h5py/_hl/dataset.py", line 455, in __array__
    arr = numpy.empty(self.shape, dtype=self.dtype if dtype is None else dtype)
ValueError: array is too big.

This post shows that I can store a huge numpy array on disk: how to store a multidimensional numpy array in PyTables? . But I can't even get the list of lists into a numpy array :(

+4

Most likely you are running a 32-bit build of Python (this is independent of the OS): a 32-bit process simply cannot address an array this large. Switch to a 64-bit Python.
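A quick way to check both halves of this claim: how big the requested array actually is, and whether the interpreter is a 32- or 64-bit build. A minimal sketch using only the standard library (the pointer-size trick is the usual portable bitness check):

```python
import struct

# A 1200 x 500,000 float64 array needs ~4.8 GB, far beyond the
# ~2-4 GB address space of a 32-bit process, regardless of RAM.
needed_bytes = 1200 * 500000 * 8
print("array size: %.1f GB" % (needed_bytes / 1e9))

# The size of a C pointer tells you whether this Python build
# is 32-bit (4 bytes) or 64-bit (8 bytes).
bits = struct.calcsize("P") * 8
print("this Python is %d-bit" % bits)
```

If the second line prints 32, no amount of installed memory will help; install a 64-bit interpreter or use one of the on-disk approaches below.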

0

On a 64-bit Python this works without errors:

import random
import numpy as np
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
np.array(lol)

If you can't switch interpreters, or the array still doesn't fit in memory, use PyTables. Instead of an in-memory Array, create a CArray (a chunked, optionally compressed array stored on disk). For example:

import numpy as np
import tables as pt

# Create container
h5 = pt.open_file('myarray.h5', 'w')
filters = pt.Filters(complevel=6, complib='blosc')
carr = h5.create_carray('/', 'carray', atom=pt.Float32Atom(), shape=(1200, 500000), filters=filters)

# Fill the array
m, n = carr.shape
for j in xrange(m):
    carr[j,:] = np.random.randn(n) 

h5.close() # writes "myarray.h5" to disk (~2.2 GB)

# Open file
h5 = pt.open_file('myarray.h5', 'r')
carr = h5.root.carray
# Display some numbers from array
print carr[973:975, :4]
print carr.dtype    

print carr.flavor outputs 'numpy'. carr behaves like a NumPy array: slices come back as ordinary NumPy arrays, so you can process the data in chunks that fit in memory.
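The practical upshot of slices coming back as NumPy arrays is that out-of-core algorithms reduce to a loop over row blocks. A sketch of the pattern (demonstrated on a small in-memory array so it runs anywhere; in practice `arr` would be the CArray, and the loop body is unchanged):

```python
import numpy as np

def blockwise_column_sum(arr, block_rows=100):
    # Works for any sliceable 2-D container whose slices are NumPy
    # arrays: a PyTables CArray, an h5py Dataset, or a plain ndarray.
    # Only block_rows rows are ever resident in memory at once.
    m, n = arr.shape
    total = np.zeros(n)
    for start in range(0, m, block_rows):
        total += arr[start:start + block_rows].sum(axis=0)
    return total

a = np.ones((12, 5))
print(blockwise_column_sum(a, block_rows=4))  # each column sums to 12.0
```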

+3

It works for me with h5py/hdf5:

import numpy as np
import h5py

lol = np.empty((1200, 5000)).tolist()

f = h5py.File('big.hdf5', 'w')
bd = f.create_dataset('big_dataset', (len(lol), len(lol[0])), dtype='f')
bd[...] = lol

Now bd behaves like a NumPy array, but the data lives on disk rather than in memory:

In [14]: bd[0, 1:10]
Out[14]:
array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.], dtype=float32)

" " ( ).

abd = f.create_dataset('another_big_dataset', (len(lol), len(lol[0])), dtype='f')
abd[...] = lol
abd += 10

The result:

In [24]: abd[:3, :10]
Out[24]: 
array([[ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.],
       [ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.],
       [ 10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.,  10.]], dtype=float32)

In [25]: bd[:3, :10]
Out[25]: 
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]], dtype=float32)

So, as far as your code can tell, it looks and acts like a NumPy array, while the data itself stays on disk!

Note that if you want fancier features (compression and the like), pytables is worth a look; for plain NumPy-style access, h5py is the simpler option.
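If installing the HDF5 stack at all is a problem, NumPy's own np.memmap gives a similar disk-backed array with plain NumPy syntax. A minimal sketch (the file path is arbitrary, and the column count is reduced here so the demo is fast; (1200, 500000) works the same way and costs ~2.4 GB of disk rather than RAM):

```python
import numpy as np
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'lol.dat')

# Create a disk-backed float32 array and fill it row by row,
# so the full data never has to exist as a Python list of lists.
mm = np.memmap(path, dtype='float32', mode='w+', shape=(1200, 500))
for i in range(mm.shape[0]):
    mm[i, :] = np.random.uniform(0, 1, mm.shape[1])
mm.flush()

# Reopen read-only and slice it like a normal array.
ro = np.memmap(path, dtype='float32', mode='r', shape=(1200, 500))
print(ro.shape)
print(ro[973, :4])
```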

See also these related questions:
very large matrices in Python NumPy
importing/exporting numpy and scipy data with SQLite and HDF5

+2

Have you tried specifying the dtype explicitly? It works for me:

import random
import numpy as np
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
ar = np.array(lol, dtype=np.float64)
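For what it's worth, the dtype you pick determines whether the array can fit at all: the element count is fixed, so memory scales with the itemsize. A quick sketch of the arithmetic for the 1200 x 500,000 case:

```python
import numpy as np

rows, cols = 1200, 500000
for dt in (np.float64, np.float32, np.float16):
    nbytes = rows * cols * np.dtype(dt).itemsize
    print('%s: %.1f GB' % (np.dtype(dt).name, nbytes / 1e9))
# float64 needs ~4.8 GB; float32 ~2.4 GB; float16 ~1.2 GB
```

Dropping to float32 halves the footprint at the cost of ~7 decimal digits of precision, which is usually plenty for uniform random data in [0, 1].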

Another option to consider is blaze: http://blaze.pydata.org/

import random
import blaze
lol = [[random.uniform(0,1) for j in range(500000)] for i in range(1200)]
ar = blaze.array(lol)
+1

Have you tried filling a preallocated array row by row instead?

lol = np.empty((1200,500000))
for i in range(lol.shape[0]):
    lol[i] = [random.uniform(0,1) for j in range(lol.shape[1])]

This avoids holding the full list of lists and the array in memory at the same time. It may still fail, though, if the array alone does not fit in memory.
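A variant of the same idea that also skips the inner Python list: fill each row straight from NumPy's random generator, so 500,000 boxed Python floats per row are never created. Shown here at a reduced column count so it runs quickly (use 500000 in practice):

```python
import numpy as np

rows, cols = 1200, 500   # cols = 500000 in the real case
arr = np.empty((rows, cols))
for i in range(rows):
    # One C-level call per row instead of cols Python float objects.
    arr[i, :] = np.random.uniform(0, 1, cols)

print(arr.shape, arr.dtype)
```

Peak memory is then essentially the array itself plus one row, instead of roughly double the array for the list-of-lists version.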

-2

Source: https://habr.com/ru/post/1532229/

