Will this change require a restart of the cluster, or will it be picked up automatically so that all new files get a default block size of 128 MB?
For this property change to take effect, a cluster restart is required.
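For reference, a minimal sketch of the relevant setting in hdfs-site.xml, assuming the target is 128 MB (134217728 bytes); recent Hadoop releases use the property name dfs.blocksize, while dfs.block.size is the older, deprecated alias:

<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
</property>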
What will happen to existing files with a 64 MB block size? Will the configuration change be applied to them automatically?
No. Existing files keep their original block size; the new value applies only to files written after the change.
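If you want to verify the block size of an existing file, one option (the path below is only an example) is the stat command, where %o prints the block size in bytes:

hadoop fs -stat "%o" /path/to/existing/file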
If this does not happen automatically, how can the block size of existing files be changed manually?
To rewrite existing files you can use distcp, which copies the files using the new block size. You will then have to delete the old files with the smaller block size yourself. The command looks like this:
hadoop distcp -Ddfs.block.size=XX /path/to/old/files /path/to/new/files/with/larger/block/sizes
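As a rough end-to-end sketch, assuming an example directory /data/events and the 128 MB value in bytes, you would copy with the new block size, verify the result, remove the old data, and optionally rename the copy back to the original path:

hadoop distcp -Ddfs.block.size=134217728 /data/events /data/events_128m
hadoop fs -rm -r /data/events
hadoop fs -mv /data/events_128m /data/events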