HDFS as a volume in the cloudera/quickstart Docker image

I am new to both the team and Docker.

I was working on a Dockerfile that extends the cloudera/quickstart image, and I wanted to mount a directory from the host node over the HDFS data location, so that performance would improve and the data would be persisted locally.

When I mount the volume anywhere else with -v /localdir:/someDir, everything works fine, but that is not my goal. When I do -v /localdir:/var/lib/hadoop-hdfs, both the datanode and the namenode fail to start, and I get: "cd /var/lib/hadoop-hdfs: Permission denied". And when I do -v /localdir:/var/lib/hadoop-hdfs/cache, there is no permission error, but the datanode, the namenode, or one of them still fails to start when the image is launched, and I cannot find anything useful in the log files about the reason.
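One way to see why the mount fails is to compare the ownership of the mounted path with what HDFS expects. A sketch, assuming a container named "cloudera" (a hypothetical name; substitute your own) and a host directory /localdir:

```shell
# Start the quickstart image with the host directory bind-mounted over the
# HDFS data directory (the entrypoint is the one documented for the image):
docker run --name cloudera -d \
  -v /localdir:/var/lib/hadoop-hdfs \
  cloudera/quickstart /usr/bin/docker-quickstart

# A bind mount keeps the host's uid/gid (often root:root), while the HDFS
# daemons run as the hdfs user and expect hdfs:hadoop on this path:
docker exec -it cloudera ls -ld /var/lib/hadoop-hdfs
```

If the listing shows an owner other than hdfs:hadoop, that mismatch would explain the "Permission denied" error.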

Has anyone come across this problem, or tried another solution for keeping HDFS data outside the Docker container?

1 answer

You have to run

docker exec -it "YOUR CLOUDERA CONTAINER" chown -R hdfs:hadoop /var/lib/hadoop-hdfs/

This gives the hdfs user ownership of the mounted directory, so the datanode and namenode can write to it.

Source: https://habr.com/ru/post/1648742/
