How do I move all the files in a directory into several directories, each containing a given number of files?

I have a directory containing over 27,000 images.

I want to split these files into folders, each of which contains about 500 images.

The order does not matter; I just want to split them up.

+6
4 answers

The following should work:

    dest_base="destination"
    src_dir="src"
    filesperdir=500
    atfile=0
    atdir=0

    for file in "$src_dir"/*; do
        if ((atfile == 0)); then
            dest_dir=$(printf '%s/%0.5d' "$dest_base" "$atdir")
            [[ -d $dest_dir ]] || mkdir -p "$dest_dir"
        fi
        mv "$file" "$dest_dir"
        ((atfile++))
        if ((atfile >= filesperdir)); then
            atfile=0
            ((atdir++))
        fi
    done
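
A quick way to sanity-check the result afterwards (a minimal sketch; it assumes the destination/00000, destination/00001, ... layout produced by the script above):

    # Print the number of files that ended up in each destination subdirectory.
    for d in destination/*/; do
        printf '%s: %d files\n' "$d" "$(ls "$d" | wc -l)"
    done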
+6

A "simple" find / xargs will do:

    find . -maxdepth 1 -type f -print0 | xargs -r -0 -P0 -n 500 sh -c 'mkdir "newdir.$$"; mv "$@" "newdir.$$/"' xx

Explanation:

  • find
    • -maxdepth 1 prevents find from descending into subdirectories; this safety net is unnecessary if you know there are none.
    • -type f match regular files only
    • -print0 separate file names with the NUL character instead of LF (to handle strange names)
  • xargs
    • -r do not run the command at all if the argument list is empty
    • -0 read NUL-separated file names
    • -P0 spawn as many processes in parallel as needed
    • -n 500 pass at most 500 arguments to each invocation
  • sh
    • -c execute the command line given as the next argument
    • mkdir "newdir.$$" create a new directory whose name ends with the PID of the shell process
    • mv "$@" "newdir.$$/" move the script's arguments (each one properly quoted by "$@") into the newly created directory
    • xx placeholder that becomes $0 of the inline script (see the sh manual)

Note that this is not something I would use in production, mainly because $$ (the PID) is different for each process xargs spawns, so the resulting directory names are unpredictable.

If you need the files sorted, you can insert sort -z between find and xargs, as shown below.
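
Putting the pieces together, the sorted pipeline would look like this (the same command as above, with sort -z spliced in):

    # Sort the NUL-separated names before batching them into directories.
    find . -maxdepth 1 -type f -print0 | sort -z |
        xargs -r -0 -P0 -n 500 sh -c 'mkdir "newdir.$$"; mv "$@" "newdir.$$/"' xx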

If you need more meaningful directory names, you can use something like this:

    echo 1 >../seq
    find . -maxdepth 1 -type f -print0 | sort -z |
        xargs -r -0 -P1 -n 500 sh -c 'read NR <../seq; mkdir "newdir.$NR"; mv "$@" "newdir.$NR/"; expr $NR + 1 >../seq' xx
  • echo 1 >../seq writes the first directory suffix to a file (which must live outside the current directory, or it would get moved as well)
  • -P1 tells xargs to run only one command at a time, to prevent races on the ../seq file
  • read NR <../seq reads the current directory suffix from the file
  • expr $NR + 1 >../seq writes the next directory suffix for the following invocation
  • sort -z sorts the NUL-separated file names
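
If you would rather not keep state in a ../seq file, the same numbering can be done inside a single shell process. This is my own sketch, not part of the original answer; it assumes bash, and the newdir.N names are arbitrary:

    #!/bin/bash
    # Read NUL-separated, sorted file names and move them in batches of 500
    # into newdir.1, newdir.2, ... created on demand.
    n=0 count=0
    find . -maxdepth 1 -type f -print0 | sort -z |
    while IFS= read -r -d '' f; do
        if ((count == 0)); then
            ((n++))
            mkdir "newdir.$n"
        fi
        mv "$f" "newdir.$n/"
        ((count = (count + 1) % 500))
    done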
+9

OK, the next solution uses temporary files holding lists of 500 file names each; adapt it as needed. First, we list all the files in the current directory, split the list into chunks of 500 names, and save the chunks in outputXYZ.* files:

    ls | split -l 500 - outputXYZ.

    # Then we go through all those list files
    count=0
    for i in outputXYZ.*; do
        ((count++))
        # We store the result in a dir.X directory (created in the current directory)
        mkdir dir.$count 2>/dev/null
        # And move those files into it
        cat $i | xargs mv -t dir.$count
        # Remove the temp file
        rm $i
    done

In the end, you will have all your images in the directories dir.1 (files 1..500), dir.2 (501..1000), dir.3, and so on.
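
One caveat worth noting (my addition, not from the original answer): parsing ls output with plain xargs splits on any whitespace, so names containing spaces get mangled. With GNU xargs you can at least split on newlines only; a sketch of the same approach under that assumption:

    # Same split-based idea, but tolerant of spaces (not newlines) in names.
    ls | split -l 500 - outputXYZ.
    count=0
    for i in outputXYZ.*; do
        ((count++))
        mkdir -p "dir.$count"
        xargs -d '\n' mv -t "dir.$count" < "$i"   # GNU xargs: -d '\n' splits on newlines only
        rm "$i"
    done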

0

You can start with this:

    mkdir new_dir ; find source_dir -maxdepth 1 -type f | head -n 500 | xargs -I {} mv {} new_dir

This will create new_dir and move up to 500 files from source_dir into new_dir (the -maxdepth 1 -type f is needed so that find does not list source_dir itself or recurse into subdirectories). You still have to call this manually with a different new_dir each time until the source directory is empty, and you have to deal with file names that contain special characters; a newline in a name will break this pipeline. A loop that automates the repetition is sketched below.
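
To avoid running it by hand repeatedly, you could wrap it in a loop along these lines (a sketch under the same special-character caveats; the new_dir.N naming is my assumption):

    # Peel off batches of up to 500 files until source_dir has no regular files left.
    i=1
    while [ -n "$(find source_dir -maxdepth 1 -type f | head -n 1)" ]; do
        mkdir "new_dir.$i"
        find source_dir -maxdepth 1 -type f | head -n 500 | xargs -I {} mv {} "new_dir.$i"
        i=$((i + 1))
    done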

-1

Source: https://habr.com/ru/post/907327/

