You should start by adding `use strict` and `use warnings` at the top of your program, and declaring every variable with `my` at its first point of use. This will reveal many simple errors that are otherwise hard to detect.
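For instance, here is a minimal sketch of the kind of mistake these pragmas catch (the variable names are invented for illustration):

```perl
use strict;
use warnings;

# With strict, every variable must be declared with my before use.
my $total  = 0;
my @values = (1, 2, 3);

for my $v (@values) {
    $total += $v;
}

print "$total\n";    # prints 6

# Without strict, a typo such as $totl += $v would silently create a
# brand-new variable; with strict it is a compile-time error instead.
```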
You should also use the three-argument form of `open` together with lexical filehandles, and the Perl idiom for handling errors when opening files is to add `or die` to the `open` call. Testing with an `if` whose success branch is an empty block just wastes space and makes the successful path unreadable. The `open` call should look like this:
```perl
open my $fh, '>', 'myfile' or die "Unable to open file: $!";
```
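As a reading counterpart to that line, here is a minimal self-contained sketch (the file name and contents are invented for illustration; the file is written first so the example runs on its own):

```perl
use strict;
use warnings;

# Write a small sample file so the read example is self-contained.
open my $out, '>', 'myfile.txt' or die "Unable to open file: $!";
print {$out} "alpha\nbeta\n";
close $out;

# Three-argument open with a lexical filehandle, plus "or die":
open my $fh, '<', 'myfile.txt' or die "Unable to open file: $!";
my @lines;
while (my $line = <$fh>) {
    chomp $line;
    push @lines, $line;
}
close $fh;

print scalar(@lines), "\n";    # 2
```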
Finally, it is much safer to use a module when you are processing CSV files, as there are many edge cases that a naive `split /,/` gets wrong (quoted fields, embedded commas, and so on). The `Text::CSV` module has done all that work for you and is available on CPAN.
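As a brief illustration of why (the sample line is invented), `Text::CSV` correctly handles a quoted field containing a comma, which a plain `split /,/` would break apart:

```perl
use strict;
use warnings;
use Text::CSV;

my $csv = Text::CSV->new({ binary => 1 })
    or die "Cannot create Text::CSV object: " . Text::CSV->error_diag;

# A quoted field with an embedded comma: split /,/ would yield 4 pieces.
my $line = 'Smith,"10 Main St, Springfield",42';

$csv->parse($line) or die "Parse failed on: " . $csv->error_input;
my @fields = $csv->fields;

print scalar(@fields), "\n";    # 3 fields, not 4
print "$fields[1]\n";           # 10 Main St, Springfield
```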
The problem is that after reading to the end of the first file, you never rewind or reopen it before reading from the same filehandle again in the inner loop. No more data can be read from that handle, so the program behaves as if the file were empty.
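If you really do need two passes over the same handle, one fix is to `seek` back to the start before the second pass. A minimal self-contained sketch (file name and contents invented for illustration):

```perl
use strict;
use warnings;

# Create a small sample file so the sketch runs on its own.
open my $out, '>', 'first.txt' or die "Unable to open file: $!";
print {$out} "one\ntwo\n";
close $out;

open my $fh, '<', 'first.txt' or die "Unable to open file: $!";

my $first_pass = 0;
$first_pass++ while <$fh>;    # reads to end-of-file

# Without this rewind, the next loop would see no data at all:
seek $fh, 0, 0 or die "Unable to rewind: $!";

my $second_pass = 0;
$second_pass++ while <$fh>;
close $fh;

print "$first_pass $second_pass\n";    # 2 2
```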
Rereading the same file hundreds of times to match up corresponding records is a poor strategy in any case. If the file is of reasonable size, you should build an in-memory data structure to hold its contents. A Perl hash is ideal, because it lets you look up the data matching a given name instantly.
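A minimal sketch of that approach, with an invented file name and data, keying the hash on the first column:

```perl
use strict;
use warnings;

# Create a small sample file (hypothetical data) so this runs standalone.
open my $out, '>', 'names.csv' or die "Unable to open file: $!";
print {$out} "alice,23,red\nbob,31,blue\n";
close $out;

# Read the file once, storing each record in a hash keyed by name.
my %data;
open my $fh, '<', 'names.csv' or die "Unable to open file: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($name, @rest) = split /,/, $line;    # fine here; use Text::CSV for real CSV
    $data{$name} = \@rest;
}
close $fh;

# Matching a record is now a single hash lookup, not a file rescan:
print "$data{bob}[1]\n";    # blue
```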
I have written a version of your code that demonstrates these points. I cannot properly test it, since I have no sample data, but if you still have problems, let us know.
```perl
use strict;
use warnings;

use Text::CSV;

my $csv = Text::CSV->new;

my %data;
```