Get HDFS folder size from Java

I have an HDFS folder that contains subdirectories, and I want to get its size from Java.

From the command line we can use the -dus option, but can anyone help me with how to get the same thing in Java?

+6
4 answers

The getSpaceConsumed() method of the ContentSummary class will return the actual space that the file/directory occupies in the cluster, i.e. it takes into account the replication factor set for the cluster.

For example, if the replication factor in the Hadoop cluster is 3 and the directory size is 1.5 GB, getSpaceConsumed() will return a value of 4.5 GB.

The getLength() method of the ContentSummary class will return the actual file/directory size.
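
For reference, a minimal sketch showing both calls (the path /inputdir is just a placeholder, and the configuration is assumed to be picked up from core-site.xml / hdfs-site.xml on the classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ContentSummary;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DirSizeDemo {
        public static void main(String[] args) throws Exception {
            // Picks up core-site.xml / hdfs-site.xml from the classpath
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Placeholder path; replace with your own HDFS directory
            ContentSummary summary = fs.getContentSummary(new Path("/inputdir"));

            // Logical size of the data (what `hadoop fs -dus` reports)
            System.out.println("Length (bytes): " + summary.getLength());

            // Physical space used: length multiplied by the replication factor
            System.out.println("Space consumed (bytes): " + summary.getSpaceConsumed());
        }
    }

Run against a 1.5 GB directory on a cluster with replication factor 3, this should print 1.5 GB for the length and 4.5 GB for the space consumed (in bytes).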

+23

You can use the getContentSummary(Path f) method provided by the FileSystem class. It returns a ContentSummary object on which the getSpaceConsumed() method can be called, which will give you the size of the directory in bytes.

Usage:

    package org.myorg.hdfsdemo;

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GetDirSize {

        /**
         * @param args
         * @throws IOException
         */
        public static void main(String[] args) throws IOException {
            Configuration config = new Configuration();
            // Load the cluster configuration files
            config.addResource(new Path(
                    "/hadoop/projects/hadoop-1.0.4/conf/core-site.xml"));
            config.addResource(new Path(
                    "/hadoop/projects/hadoop-1.0.4/conf/hdfs-site.xml"));
            FileSystem fs = FileSystem.get(config);
            Path filenamePath = new Path("/inputdir");
            System.out.println("SIZE OF THE HDFS DIRECTORY : "
                    + fs.getContentSummary(filenamePath).getSpaceConsumed());
        }
    }

HTH

+14

Thanks guys.

Scala version

    package com.beloblotskiy.hdfsstats.model.hdfs

    import java.nio.file.{Files => NioFiles, Paths => NioPaths}
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem
    import org.apache.hadoop.fs.Path
    import org.apache.commons.io.IOUtils
    import com.beloblotskiy.hdfsstats.common.Settings

    /**
     * HDFS utilities
     * @author v-abelablotski
     */
    object HdfsOps {
      private val conf = new Configuration()
      conf.addResource(new Path(Settings.pathToCoreSiteXml))
      conf.addResource(new Path(Settings.pathToHdfsSiteXml))
      private val fs = FileSystem.get(conf)

      /**
       * Calculates disk usage with the replication factor.
       * If the function returns 3 GB for a folder with replication factor = 3,
       * it means HDFS holds 1 GB of files and uses 3 GB of space for the copies.
       */
      def duWithReplication(path: String): Long = {
        val fsPath = new Path(path)
        fs.getContentSummary(fsPath).getSpaceConsumed()
      }

      /**
       * Calculates disk usage without taking the replication factor into account.
       * The result is the same as `hadoop fs -du /hdfs/path/to/directory`.
       */
      def du(path: String): Long = {
        val fsPath = new Path(path)
        fs.getContentSummary(fsPath).getLength()
      }

      //...
    }

+4

Spark-shell tool to display all tables and their consumption

A typical and illustrative spark-shell script that loops through all databases, tables, and partitions to obtain their sizes and report them as a CSV file:

    // sshell -i script.scala > ls.csv
    import org.apache.hadoop.fs.{FileSystem, Path}

    def cutPath(thePath: String, toCut: Boolean = true): String =
      if (toCut) thePath.replaceAll("^.+/", "") else thePath

    val warehouse = "/apps/hive/warehouse" // the Hive default location for all databases
    val fs = FileSystem.get(sc.hadoopConfiguration)

    println(s"base,table,partitions,bytes")
    fs.listStatus(new Path(warehouse)).foreach(x => {
      val b = x.getPath.toString
      fs.listStatus(new Path(b)).foreach(x => {
        val t = x.getPath.toString
        var parts = 0; var size = 0L; // var size3 = 0L
        fs.listStatus(new Path(t)).foreach(x => {
          // partition path is x.getPath.toString
          val p_cont = fs.getContentSummary(x.getPath)
          parts = parts + 1
          size = size + p_cont.getLength
          // size3 = size3 + p_cont.getSpaceConsumed
        }) // t loop
        println(s"${cutPath(b)},${cutPath(t)},${parts},${size}")
        // display opt: org.apache.commons.io.FileUtils.byteCountToDisplaySize(size)
      }) // b loop
    }) // warehouse loop
    System.exit(0) // get out from spark-shell

PS: I checked, size3 is always 3 * size, so it adds no additional information.

0

Source: https://habr.com/ru/post/1481105/
