Very fast way to get the total size of a folder

I want to quickly find the total size of any folder using python.

    import os
    from os.path import join, getsize, isfile, isdir, splitext

    def GetFolderSize(path):
        TotalSize = 0
        for item in os.walk(path):
            for file in item[2]:
                try:
                    TotalSize = TotalSize + getsize(join(item[0], file))
                except:
                    print("error with file: " + join(item[0], file))
        return TotalSize

    print(float(GetFolderSize("C:\\")) / 1024 / 1024 / 1024)

This simple script that I wrote to get the total size of a folder takes about 60 seconds (±5 seconds). Using multiprocessing, I got that down to 23 seconds on a quad-core machine.
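(The multiprocessing variant is not shown above; purely as a hypothetical sketch of one way such a split could be done, each top-level subdirectory can be handed to a worker process and the results summed:)

    # Hypothetical sketch, not the original multiprocessing script:
    # each top-level subdirectory is summed in a separate worker process.
    import os
    from os.path import join, getsize, isdir, isfile
    from multiprocessing import Pool

    def folder_size(path):
        total = 0
        for root, dirs, files in os.walk(path):
            for name in files:
                try:
                    total += getsize(join(root, name))
                except OSError:
                    pass  # file vanished or is inaccessible
        return total

    if __name__ == "__main__":
        top = "C:\\"
        subdirs = [join(top, d) for d in os.listdir(top) if isdir(join(top, d))]
        pool = Pool()
        total = sum(pool.map(folder_size, subdirs))
        # files sitting directly in the top-level directory
        total += sum(getsize(join(top, f)) for f in os.listdir(top) if isfile(join(top, f)))
        print(float(total) / 1024 / 1024 / 1024)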

Windows Explorer takes only ~3 seconds (right-click → Properties to verify this). Is there a faster way to find the total size of a folder, something close to the speed at which Windows does it?

Windows 7, Python 2.6 (I did search, but most of the time people were using a method very similar to mine). Thanks in advance.

+48
optimization python folder folders
Mar 21 '10 at 2:54
3 answers

You are at a disadvantage.

Windows Explorer almost certainly uses FindFirstFile / FindNextFile to traverse the directory structure and collect size information (via lpFindFileData ) in a single pass, making essentially one system call per file.

Python is, unfortunately, not your friend here. Specifically:

  • os.walk first calls os.listdir (which internally calls FindFirstFile / FindNextFile )
    • from this point on, any additional system call made per file can only make you slower than Windows Explorer
  • os.walk then calls isdir for each entry returned by os.listdir (which internally calls GetFileAttributesEx - or, before Win2k, a GetFileAttributes + FindFirstFile combo) to decide whether or not to recurse
  • os.walk and os.listdir also perform extra memory allocation, string and array operations, etc., to build their return values
  • you then call getsize for each file returned by os.walk (which again calls GetFileAttributesEx )

This is 3 times more system calls per file than Windows Explorer, plus memory allocation and overhead.

You can either use Anurag's solution, or try calling FindFirstFile / FindNextFile directly and recursively (which should perform comparably to Cygwin's or another win32 port's du -s some_directory ), as sketched below.
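As an illustration of that approach, here is a minimal ctypes sketch that walks a tree with FindFirstFileW / FindNextFileW only, so the sizes come back from the same calls that enumerate the directory (an assumption-laden sketch: junctions/reparse points, long paths and error reporting are ignored):

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.FindFirstFileW.restype = wintypes.HANDLE
    kernel32.FindFirstFileW.argtypes = [wintypes.LPCWSTR,
                                        ctypes.POINTER(wintypes.WIN32_FIND_DATAW)]
    kernel32.FindNextFileW.argtypes = [wintypes.HANDLE,
                                       ctypes.POINTER(wintypes.WIN32_FIND_DATAW)]
    kernel32.FindClose.argtypes = [wintypes.HANDLE]

    INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value
    FILE_ATTRIBUTE_DIRECTORY = 0x10

    def find_tree_size(path):
        # sum file sizes below `path` using only FindFirstFile/FindNextFile
        total = 0
        data = wintypes.WIN32_FIND_DATAW()
        handle = kernel32.FindFirstFileW(path + u"\\*", ctypes.byref(data))
        if handle == INVALID_HANDLE_VALUE:
            return 0
        try:
            while True:
                name = data.cFileName
                if name not in (u".", u".."):
                    if data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY:
                        total += find_tree_size(path + u"\\" + name)
                    else:
                        # the 64-bit size arrives split across two 32-bit fields
                        total += (data.nFileSizeHigh << 32) | data.nFileSizeLow
                if not kernel32.FindNextFileW(handle, ctypes.byref(data)):
                    break
        finally:
            kernel32.FindClose(handle)
        return total

    print(find_tree_size(u"C:\\Windows") / 1024.0 / 1024 / 1024)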

Refer to os.py for the implementation of os.walk , and to posixmodule.c for the implementation of listdir and win32_stat (used by both isdir and getsize ).

Note that Python's os.walk is suboptimal on all platforms (Windows and *nixes), up to and including Python 3.1. On both Windows and *nixes, os.walk could traverse the tree in a single pass without ever calling isdir , since both FindFirst / FindNext (Windows) and opendir / readdir (*nix) already return the file type via lpFindFileData->dwFileAttributes (Windows) and dirent::d_type (*nix).

Perhaps counterintuitively, on most modern configurations (e.g. Win7 and NTFS, and even some SMB implementations) GetFileAttributesEx is about twice as slow as FindFirstFile on a single file (and possibly even slower than iterating over a directory with FindNextFile ).

Update: Python 3.5 includes the new PEP 471 os.scandir() , which solves this problem by returning the file attributes along with the file name. This new feature is used to speed up the built-in os.walk() (for both Windows and Linux). You can use the scandir module on PyPI to get this behavior for older versions of Python, including 2.x.
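For example, a single-pass recursive sum on top of scandir might look like the sketch below (assuming Python 3.5+, or the scandir package on older versions); on Windows each DirEntry already carries the size and the directory flag from the underlying FindFirstFile / FindNextFile scan, so no extra per-file system call is needed:

    import os  # on older Pythons: from scandir import scandir, and call scandir(path)

    def tree_size(path):
        # single pass: no separate isdir/getsize system call per file
        total = 0
        for entry in os.scandir(path):
            if entry.is_dir(follow_symlinks=False):
                total += tree_size(entry.path)
            else:
                # on Windows, stat() here reuses data the directory scan returned
                total += entry.stat(follow_symlinks=False).st_size
        return total

    print(tree_size(u"C:\\Windows") / 1024.0 / 1024 / 1024)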

+73
Mar 21

If you want the same speed as Explorer, why not use the Windows Scripting runtime to access the same functionality, via pythoncom? For example:

    import win32com.client as com

    folderPath = r"D:\Software\Downloads"
    fso = com.Dispatch("Scripting.FileSystemObject")
    folder = fso.GetFolder(folderPath)
    MB = 1024 * 1024.0
    print("%.2f MB" % (folder.Size / MB))

It works the same way Explorer does; you can read more about the Scripting runtime at http://msdn.microsoft.com/en-us/library/bstcxhf7(VS.85).aspx .

+20
Mar 21 '10 at 3:31

I benchmarked the Python code against a tree of 15k directories containing 190k files, and compared it to the du(1) command, which is presumably about as fast as the OS can go. The Python code took 3.3 seconds versus 0.8 seconds for du. This was on Linux.

I'm not sure there is much more to squeeze out of the Python code. Note also that the first run of du took 45 seconds, obviously before the relevant i-nodes were in the block cache, so this kind of performance depends heavily on how well the system manages its cache. It would not surprise me if one or both of the following were true:

  • os.path.getsize is suboptimal on Windows
  • Windows caches directory content sizes once it has computed them
+5
Mar 21 '10 at 3:41
source share


