Is there a command to write random garbage bytes to a file?

I am testing how my application handles damaged files, but I am finding it hard to come up with test files.

So I'm wondering if there are any existing tools that can write random / garbage bytes to a file of some format.

Basically, I need this tool for:

  • It writes random junk bytes into a file.
  • It does not need to understand the file format; it just writes random bytes.
  • Ideally, it writes at random positions in the target file.
  • Batch processing would be a bonus.

Thanks.

+41
command-line linux testing
Aug 30 '10 at 7:32
3 answers

The /dev/urandom pseudo device along with dd can do this for you:

 dd if=/dev/urandom of=newfile bs=1M count=10 

This will create newfile with a size of 10 MB.

The /dev/random device often blocks when not enough randomness is available; urandom never blocks. If you need the randomness for crypto-grade purposes, you may want to avoid urandom; for anything else it should be sufficient, and it is most likely faster.

If you want to mess up just parts of your file (not the whole file), you can use C-style random functions: use rand() to pick an offset and a length n, then overwrite n bytes of the file at that offset with random bytes.
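A rough shell-level sketch of the same idea, without writing any C: dd with seek= and conv=notrunc can overwrite a few bytes at a random offset in place. This assumes GNU dd and bash's $RANDOM; the file name and byte counts below are placeholders.

 # Corrupt COUNT random bytes at a random offset, in place.
 FILE=testfile
 printf 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' > "$FILE"   # 32 known bytes

 SIZE=$(wc -c < "$FILE")
 COUNT=4                                # how many bytes to damage
 OFFSET=$(( RANDOM % (SIZE - COUNT) ))  # random start position

 # conv=notrunc keeps the rest of the file intact;
 # seek= positions the write inside the output file.
 dd if=/dev/urandom of="$FILE" bs=1 count="$COUNT" \
    seek="$OFFSET" conv=notrunc 2>/dev/null

 wc -c < "$FILE"   # file size is unchanged

Because conv=notrunc prevents dd from truncating the output file, only the targeted bytes change and the file keeps its original size.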




The following Perl script shows how this can be done (without worrying about compiling C code):

 use strict;
 use warnings;

 sub corrupt ($$$$) {
     # Get parameters, names should be self-explanatory.
     my $filespec = shift;
     my $mincount = shift;
     my $maxcount = shift;
     my $charset  = shift;

     # Work out position and size of corruption.
     my @fstat = stat ($filespec);
     my $size  = $fstat[7];
     my $count = $mincount + int (rand ($maxcount + 1 - $mincount));
     my $pos   = 0;
     if ($count >= $size) {
         $count = $size;
     } else {
         $pos = int (rand ($size - $count));
     }

     # Output for debugging purposes.
     my $last = $pos + $count - 1;
     print "'$filespec', $size bytes, corrupting $pos through $last\n";

     # Open file, seek to position, corrupt and close.
     open (my $fh, "+<$filespec") || die "Can't open $filespec: $!";
     seek ($fh, $pos, 0);
     while ($count-- > 0) {
         my $newval = substr ($charset, int (rand (length ($charset))), 1);
         print $fh $newval;
     }
     close ($fh);
 }

 # Test harness.
 system ("echo ==========");                                    #DEBUG
 system ("cp base-testfile testfile");                          #DEBUG
 system ("cat testfile");                                       #DEBUG
 system ("echo ==========");                                    #DEBUG
 corrupt ("testfile", 8, 16, "ABCDEFGHIJKLMNOPQRSTUVWXYZ ");
 system ("echo ==========");                                    #DEBUG
 system ("cat testfile");                                       #DEBUG
 system ("echo ==========");                                    #DEBUG

It consists of the corrupt function, which you call with the file name, the minimum and maximum size of the corruption, and the character set to draw replacement bytes from. The code below that is just test harness code. Here is some sample output in which you can see that a section of the file has been corrupted:

 ==========
 this is a file with nothing in it except for lowercase letters (and
 spaces and punctuation and newlines). that will make it easy to detect
 corruptions from the test program since the character range there is
 from uppercase a through z. i have to make it big enough so that the
 random stuff will work nicely, which is why i am waffling on a bit.
 ==========
 'testfile', 344 bytes, corrupting 122 through 135
 ==========
 this is a file with nothing in it except for lowercase letters (and
 spaces and punctuation and newlines). that will make iFHCGZF VJ GZDYct
 corruptions from the test program since the character range there is
 from uppercase a through z. i have to make it big enough so that the
 random stuff will work nicely, which is why i am waffling on a bit.
 ==========

It is tested at a basic level, but you may find that there are edge cases you need to take care of. Do with it what you will.

+74
Aug 30 '10 at 7:36

Just for completeness, here is another way to do this:

 shred -s 10 - > my-file 

This writes 10 random bytes to stdout and redirects them into a file. shred is usually used to destroy (securely overwrite) data, but it can also be used to create new random files. So if you already have a file that you want to fill with random data, use this:

 shred my-existing-file 
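Since the question also asks about batch processing: shred takes multiple file arguments, so a glob or a loop handles a whole directory at once. A sketch, where the testdata/ directory and file names are made up, and -n 1 does a single random pass instead of the default three:

 # Batch-corrupt every file in a directory with one random pass.
 mkdir -p testdata
 printf 'hello'  > testdata/a.bin
 printf 'world!' > testdata/b.bin

 for f in testdata/*; do
     # -n 1: a single random overwrite; file sizes are preserved
     shred -n 1 "$f"
 done

 wc -c testdata/a.bin testdata/b.bin   # sizes unchanged

Without -u, shred overwrites the files in place rather than deleting them, which is exactly what you want for generating damaged test fixtures.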
+18
Aug 30 '10 at 7:56

You can read from /dev/random :

 # generate a 50MB file named `random.stuff` filled with random stuff ...
 dd if=/dev/random of=random.stuff bs=1000000 count=50

You can also specify the size in human-readable units:

 # generate just 2MB ...
 dd if=/dev/random of=random.stuff bs=1M count=2
+5
Aug 30 '10 at 7:36
