It really depends on how large "large" is and on what resources (RAM, in this case) are available for the task.
"The feedback received reported that this is bad, as for large files it will be very slow."
CSV files are usually used to move data around without worrying too much (if at all) about the import/export, because the format is extremely simple. In most cases I have come across, the files are roughly 1 MB up to ~10 MB, but that does not mean other people will not dump far more data in CSV format.
Suppose you have an 80 MB CSV file. That is a file you want to process in chunks, otherwise (depending on your processing) you can eat up hundreds of MB of RAM. In that case I would do something like:
    while dataToProcess do
    begin
      // step 1: read <X> lines from the file, where <X> is the maximum number
      //         of lines you read in one go; if fewer lines are left to process
      //         (e.g. you are down to 50 lines and X is 100), read just those
      // step 2: process the information
      // step 3: generate output, database inserts, etc.
    end;
This way you never load the 80 MB of data into RAM, only a few hundred KB at a time, plus whatever you use for the processing itself, e.g. linked lists, dynamically built insert statements (batch inserts), and so on.
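For what it is worth, here is a minimal Delphi sketch of that loop, assuming a plain-text CSV file, a Delphi version that has TStreamReader (uses System.Classes and System.SysUtils), and a caller-supplied callback that does the actual per-line work; the procedure name, the chunk size and the callback are just placeholders for whatever your real processing needs.

    // Sketch only: read the CSV in chunks of at most AChunkSize lines,
    // then hand each line of the chunk to AProcessLine.
    procedure ProcessCsvInChunks(const AFileName: string; AChunkSize: Integer;
      const AProcessLine: TProc<string>);
    var
      Reader: TStreamReader;
      Chunk: TStringList;
      Line: string;
    begin
      Reader := TStreamReader.Create(AFileName, TEncoding.UTF8);
      try
        Chunk := TStringList.Create;
        try
          while not Reader.EndOfStream do
          begin
            // step 1: read at most AChunkSize lines; near the end of the
            // file the chunk simply contains fewer lines
            Chunk.Clear;
            while (Chunk.Count < AChunkSize) and (not Reader.EndOfStream) do
              Chunk.Add(Reader.ReadLine);
            // step 2/3: process the chunk and generate the output
            // (database inserts, another file, whatever you need)
            for Line in Chunk do
              AProcessLine(Line);
          end;
        finally
          Chunk.Free;
        end;
      finally
        Reader.Free;
      end;
    end;

The memory footprint stays at roughly AChunkSize lines no matter how big the file gets.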
"... however, I was noted and did not receive an interview due to the use of TADODataset.
I'm not surprised; they probably wanted to see whether you can come up with an algorithm and deliver a simple solution on the spot, without relying on ready-made components.
They probably expected you to use dynamic arrays and to write one (or more) sorting algorithms yourself.
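Purely as an illustration, this is roughly the kind of thing they may have had in mind: the lines of one chunk kept in a dynamic array and sorted by one column with a hand-written insertion sort (uses System.SysUtils). The comma delimiter, the column index and the plain string comparison are assumptions on my part, not part of the original task.

    // Sort CSV lines in place by the value of column AColumn.
    procedure SortLinesByColumn(var Lines: TArray<string>; AColumn: Integer);
    var
      I, J: Integer;
      Key: string;

      // extract the AColumn-th field of a line (empty if the line is short)
      function ColumnValue(const ALine: string): string;
      var
        Parts: TArray<string>;
      begin
        Parts := ALine.Split([',']);
        if AColumn < Length(Parts) then
          Result := Parts[AColumn]
        else
          Result := '';
      end;

    begin
      // insertion sort: fine for chunk-sized data; swap in quicksort or
      // mergesort if you need to sort whole files in memory
      for I := 1 to High(Lines) do
      begin
        Key := Lines[I];
        J := I - 1;
        while (J >= 0) and (ColumnValue(Lines[J]) > ColumnValue(Key)) do
        begin
          Lines[J + 1] := Lines[J];
          Dec(J);
        end;
        Lines[J + 1] := Key;
      end;
    end;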
"Should I use string lists or something like that?"
The answer would be the same: again, I think they wanted to see how you "work."