I agree with the answer which says that it's best to do this kind of thing in a background thread, and I don't want to encourage you to persist in doing it in your main thread.
However, if you go to the command line and do the following:
    dir c:\*.* /s > dump.txt & notepad dump.txt
You may be surprised how quickly Notepad appears.
So, you can speed up your GetAllSubFolders quite a bit even if you keep it in your main thread, e.g. by surrounding the code with calls to main.Memo1.Lines.BeginUpdate and main.Memo1.Lines.EndUpdate, and likewise main.Listbox1.Items.BeginUpdate and EndUpdate. That stops these controls being updated while your code runs (which is actually where it spends most of its time, along with the 'if Pos(...)' business I comment on below). And, in case you haven't got the message yet, Application.ProcessMessages is evil (mostly).
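As a minimal sketch (assuming your existing GetAllSubFolders fills Memo1 and Listbox1 directly, as the code in the question appears to), the wrapping could look like this:

    // Suspend repainting of both controls while they are being filled, and
    // restore it in a finally block so an exception can't leave them frozen.
    main.Memo1.Lines.BeginUpdate;
    main.Listbox1.Items.BeginUpdate;
    try
      GetAllSubFolders('D:\');  // your existing routine from the question
    finally
      main.Listbox1.Items.EndUpdate;
      main.Memo1.Lines.EndUpdate;
    end;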
I did some timings on my D: drive, which is a 500 GB SSD with 263562 files in 35949 directories.
- Code in your question: 6777 seconds
- Dumping to Notepad as described above: 15 seconds
- The code below, in the main thread: 9.7 seconds.
The reason I've included the code below in this answer is that you'll find it far easier to execute in a thread, because it collects the results into a TStringList, whose contents you can then assign to your memo once the list is complete.
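For example, a minimal sketch of running it in a background thread might look like the following (Button2Click is a hypothetical handler; GetAllSubFolders is the function from the code below):

    procedure TMain.Button2Click(Sender: TObject);
    begin
      TThread.CreateAnonymousThread(
        procedure
        var
          TL: TStringList;
        begin
          // The slow disk scan runs off the main thread...
          TL := GetAllSubFolders('D:\');
          try
            // ...and only the final assignment touches the VCL,
            // synchronized back onto the main thread.
            TThread.Synchronize(nil,
              procedure
              begin
                Memo1.Lines.Text := TL.Text;
              end);
          finally
            TL.Free;
          end;
        end).Start;
    end;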
A few comments on the code in your question, which I imagine you got from somewhere:
It recurses pointlessly, even when the current entry in Rec is a plain file. The code below only recurses when the current Rec entry is a directory.
It looks like it is trying to avoid duplicates with the 'if Pos(...)' business. That shouldn't be necessary (except perhaps if there is a symbolic link somewhere, e.g. created with the MkLink command, that points elsewhere on the drive), and it is hideously inefficient: it searches for each filename in the memo's contents, which get longer and longer as more files are found. In the code below, the string list is set up to ignore duplicates and has its Sorted property set to True, which makes its duplicate checking much faster, because it can do a binary search through its contents rather than a serial one.
It computes Path + Rec.Name six times for each item found, which is probably inefficient at runtime and certainly bloats the source code, although this is a minor point compared with the first two.
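To make the second point concrete, here is a minimal, self-contained console sketch (purely illustrative) of the Sorted/dupIgnore behaviour the code below relies on:

    program DupDemo;
    {$APPTYPE CONSOLE}
    uses
      System.Types, System.Classes;
    var
      SL: TStringList;
    begin
      SL := TStringList.Create;
      try
        SL.Sorted := True;          // Add locates the insert position by binary search
        SL.Duplicates := dupIgnore; // a string that is already present is silently skipped
        SL.Add('C:\temp\a.txt');
        SL.Add('C:\temp\a.txt');    // ignored: the binary search finds the existing entry
        Writeln(SL.Count);          // prints 1
      finally
        SL.Free;
      end;
    end.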
Code:

    function GetAllSubFolders(sPath: String): TStringList;

      procedure GetAllSubFoldersInner(sPath: String);
      var
        Path, AFileName, Ext: String;
        Rec: TSearchRec;
        Done: Boolean;
      begin
        Path := IncludeTrailingBackslash(sPath);
        if FindFirst(Path + '*.*', faAnyFile, Rec) = 0 then begin
          Done := False;
          while not Done do begin
            if (Rec.Name <> '.') and (Rec.Name <> '..') then begin
              AFileName := Path + Rec.Name;  // computed once per entry, then reused
              Ext := ExtractFileExt(AFileName).ToLower;  // not used below; handy if you want to filter by extension
              if not ((Rec.Attr and faDirectory) = faDirectory) then begin
                Result.Add(AFileName)
              end
              else begin
                GetAllSubFoldersInner(AFileName);  // recurse only for directories
              end;
            end;
            Done := FindNext(Rec) <> 0;
          end;
          FindClose(Rec);
        end;
      end;

    begin
      Result := TStringList.Create;
      Result.BeginUpdate;
      Result.Sorted := True;
      Result.Duplicates := dupIgnore;  // don't add duplicate filenames to the list
      GetAllSubFoldersInner(sPath);
      Result.EndUpdate;
    end;

    procedure TMain.Button1Click(Sender: TObject);
    var
      T1, T2: Integer;
      TL: TStringList;
    begin
      T1 := GetTickCount;
      TL := GetAllSubFolders('D:\');
      try
        Memo1.Lines.BeginUpdate;
        try
          Memo1.Lines.Text := TL.Text;
        finally
          Memo1.Lines.EndUpdate;
        end;
        T2 := GetTickCount;
        Caption := Format('GetAll: %d, Load: %d, Files: %d', [T2 - T1, GetTickCount - T2, TL.Count]);
      finally
        TL.Free;
      end;
    end;