In my experience, the best approach is to parse the file in a single pass using csvread (which wraps dlmread, which in turn wraps textscan, so the time penalty is not significant). Of course, this only works as long as the file fits in free RAM. If the file is larger than your RAM (I recently had to parse a 31 GB file, for example), then I would use fopen, read the file line by line (or in chunks or blocks, whatever you prefer), and write the processed results to an output file. That way, in principle, the files you can handle are limited only by your file system, not by memory.
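A minimal sketch of the fopen / line-by-line approach described above (the file names and the per-line processing step are placeholders, not from the original answer):

```matlab
% Process a CSV file too large for RAM, one line at a time.
% 'input.csv' and 'output.csv' are placeholder names.
fin  = fopen('input.csv',  'r');
fout = fopen('output.csv', 'w');

line = fgetl(fin);
while ischar(line)                    % fgetl returns -1 (not a char) at EOF
    values = sscanf(line, '%f,');     % parse one row of comma-separated numbers
    % ... process 'values' here ...
    fprintf(fout, '%g,', values);     % write the (possibly transformed) row back
    fprintf(fout, '\n');
    line = fgetl(fin);
end

fclose(fin);
fclose(fout);
```

Reading in larger chunks with fread, or using textscan with a fixed number of rows per call, follows the same pattern and amortizes the per-line overhead.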