How to use memory_profiler (Python module) with class methods?

I want to measure the time and memory usage of a class method. I did not find an out-of-the-box solution for this (are there such modules?), so I decided to use timeit for timing and memory_usage from the memory_profiler module for memory.

I ran into problems profiling methods with memory_profiler. I tried several approaches and none of them worked.

When I try to use partial from functools, I get this error:

```
  File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 126, in memory_usage
    aspec = inspect.getargspec(f)
  File "/usr/lib64/python2.7/inspect.py", line 815, in getargspec
    raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x252da48> is not a Python function
```

By the way, exactly the same approach works fine with the timeit function.
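To illustrate, timeit happily accepts a partial that binds a class method, which is exactly what fails with memory_profiler. A minimal sketch (the Worker class here is a hypothetical stand-in for my class):

```python
import timeit
from functools import partial


class Worker(object):
    def compute(self, n):
        return sum(range(n))


w = Worker()
# timeit accepts any zero-argument callable,
# including a partial wrapping a bound method
t = timeit.timeit(partial(w.compute, 1000), number=100)
print(t)
```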

When I try to use a lambda instead, I get this error:

```
  File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 141, in memory_usage
    ret = parent_conn.recv()
IOError: [Errno 4] Interrupted system call
```

How can I handle class methods using memory_profiler?

PS: I have memory_profiler 0.26 (installed with pip).

UPD: This turned out to be a bug in memory_profiler. You can track its status here: https://github.com/fabianp/memory_profiler/issues/47

1 answer

If you want to see changes in the memory allocated to the Python VM, you can use psutil. Here is a simple decorator using psutil that prints the change in memory:

```python
import functools
import os

import psutil


def print_memory(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        process = psutil.Process(os.getpid())
        # psutil >= 3.0 uses memory_info(); older versions
        # spelled it get_memory_info()
        start = process.memory_info()
        try:
            return fn(*args, **kwargs)
        finally:
            end = process.memory_info()
            print(end.rss - start.rss, end.vms - start.vms)
    return wrapper


@print_memory
def f():
    s = 'a' * 100
```

In all likelihood, the output will show no change in memory. This is because small allocations inside the Python VM may not require more memory from the OS. If you allocate a large array, you will see something different:

```python
import numpy


@print_memory
def f():
    return numpy.zeros((512, 512))
```

Here you should see some changes in memory.
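The size of that jump is easy to predict: numpy.zeros defaults to float64, so a 512×512 array occupies 512 * 512 * 8 bytes, i.e. 2 MiB:

```python
import numpy

a = numpy.zeros((512, 512))  # dtype defaults to float64 (8 bytes each)
print(a.nbytes)  # 512 * 512 * 8 = 2097152 bytes
```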

If you want to know how much memory is used by each allocated object, the only tool I know of is heapy:

```
In [1]: from guppy import hpy; hp = hpy()

In [2]: h = hp.heap()

In [3]: h
Out[3]:
Partition of a set of 120931 objects. Total size = 17595552 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  57849  48  6355504  36   6355504  36 str
     1  29117  24  2535608  14   8891112  51 tuple
     2    394   0  1299952   7  10191064  58 dict of module
     3   1476   1  1288416   7  11479480  65 dict (no owner)
     4   7683   6   983424   6  12462904  71 types.CodeType
     5   7560   6   907200   5  13370104  76 function
     6    858   1   770464   4  14140568  80 type
     7    858   1   756336   4  14896904  85 dict of type
     8    272   0   293504   2  15190408  86 dict of class
     9    304   0   215064   1  15405472  88 unicode
<501 more rows. Type e.g. '_.more' to view.>
```

I have not used it in a long time, so I recommend experimenting with it and reading the documentation. Note that for an application that uses a large amount of memory, computing this information can be very slow.
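(On Python 3.4 and later, the standard library's tracemalloc module offers comparable per-allocation statistics without third-party packages; a minimal sketch, not a replacement for heapy's per-object view:)

```python
import tracemalloc

tracemalloc.start()
data = [bytes(1000) for _ in range(1000)]  # ~1 MB of allocations
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# top three allocation sites, grouped by source line
for stat in snapshot.statistics('lineno')[:3]:
    print(stat)
```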


Source: https://habr.com/ru/post/945185/

