I have a Perl script that parses a data file and writes 5 output files, each containing an 1100 x 1300 grid. The script works, but in my opinion it is awkward and probably inefficient. It is also inherited code, which I have modified a bit to make it more readable, but it is still a mess.
Currently the script reads the data file (~4 MB) into an array. It then iterates over that array, analyzing its contents and pushing the values into another array, and finally prints them to a file in yet another loop. If no value is defined for a given point, it prints 9999. Zero is a valid value.
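Because zero is a valid value, the "is this point missing" test has to use defined rather than a plain truth test; the printing step boils down to roughly this (a simplified sketch, not the real code: $out stands in for whichever output handle, and the row/column order is fixed here just for illustration):

for my $y (0 .. 1299) {
    my @row = map { defined $data[$_][$y] ? $data[$_][$y] : 9999 } 0 .. 1099;
    print $out "@row\n";
}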
The data file has 5 different parameters, and each of them is written to its own file.
Sample data:
data for the param: 2
5559
// (x,y) count values
280 40 3 0 0 0
280 41 4 0 0 0 0
280 42 5 0 0 0 0 0
281 43 4 0 0 10 10
281 44 4 0 0 10 10
281 45 4 0 0 0 10
281 46 4 0 0 10 0
281 47 4 0 0 10 0
281 48 3 10 10 0
281 49 2 0 0
41 50 3 0 0 0
45 50 3 0 0 0
280 50 2 0 0
40 51 8 0 0 0 0 0 0 0 0
...
data for the param: 3
3356
// (x,y) count values
The 5559 on the line after the header is the number of data lines in that block; each data line holds x, y, a count, and then that many values.
The script does its job, but it feels clumsy to me and I suspect it wastes time. Is there a cleaner and faster way to do this than all these nested for-loops?
EDIT:
A few clarifications: the grid is 1100 x 1300 points, and every point that gets no value from the data is written out as 9999. On a data line the values belong to consecutive x coordinates, starting at the given x (up to x + n), while y stays the same.
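In code, one data line maps onto the grid roughly like this (variable names are only for illustration; the comment shows one of the sample lines above):

# "281 44 4 0 0 10 10"  ->  (281,44)=0  (282,44)=0  (283,44)=10  (284,44)=10
my ($x, $y, $n, @values) = split ' ', $line;
$data[$x + $_][$y] = $values[$_] for 0 .. $n - 1;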
UPDATE:
I did some timing: a full run of the script takes roughly 3 minutes, and about 50% of that is spent in the part shown below, so that is what I would most like to speed up.
This is the structure of that part of the script, with the uninteresting bits left out. Do you see anything else worth improving?
# first pass: remember the indices of the "data for the param:" header lines
for my $i (0..$#indata) {
    ...
    if ($indata[$i] =~ /^data for the param:/) {
        push @block, $i;
    }
    ...
}
# second pass: process the five parameter blocks one by one
for my $k (0..4) {
    ...
    if ($k == 4) {                  # the last block runs to the end of the input
        $enddata = $#indata;
    }
    else {
        $enddata = $block[$k+1];    # otherwise it ends where the next block starts
    }
    ...
    for my $p ($block[$k]..$enddata) {
        ...
        # store this line's values in the grid:
        # consecutive x positions, y stays the same (see EDIT above)
        for my $m (0 .. $n - 1) {
            $data[$x + $m][$y] = $values[$m];
        }
    }
    print2file();
}
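One idea I keep coming back to is dropping the index bookkeeping entirely and doing everything in a single pass while reading the file line by line, instead of slurping it into @indata first. Something like this sketch (the input/output file names, the output layout and the write_grid helper are made up, and I have guessed that x runs 0..1099 and y 0..1299):

#!/usr/bin/perl
use strict;
use warnings;

my @grid;      # grid of the parameter block currently being read
my $param;     # number of the current parameter

open my $in, '<', 'input.dat' or die "input.dat: $!";
while (my $line = <$in>) {
    if ($line =~ /^data for the param:\s*(\d+)/) {
        write_grid($param, \@grid) if defined $param;   # flush the previous block
        $param = $1;
        @grid  = ();
        next;
    }
    next if $line =~ m{^\s*//};                         # the "// (x,y) count values" comment
    my ($x, $y, $n, @values) = split ' ', $line;
    next unless defined $n and @values;                 # skips the line-count line and blanks
    $grid[$x + $_][$y] = $values[$_] for 0 .. $n - 1;   # consecutive x, y stays the same
}
write_grid($param, \@grid) if defined $param;           # flush the last block
close $in;

# write one file per parameter, 9999 for points that got no value
sub write_grid {
    my ($param, $grid) = @_;
    open my $out, '>', "param_$param.dat" or die "param_$param.dat: $!";
    for my $y (0 .. 1299) {
        print $out join(' ',
            map { defined $grid->[$_][$y] ? $grid->[$_][$y] : 9999 } 0 .. 1099
        ), "\n";
    }
    close $out;
}

Would that be a reasonable direction, or is reading a ~4 MB file into memory first not the problem at all?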