You can get the number of records per partition as follows:
import spark.implicits._  // needed for .toDF on an RDD

df
  .rdd
  .mapPartitionsWithIndex { case (i, rows) => Iterator((i, rows.size)) }
  .toDF("partition_number", "number_of_records")
  .show()
Note that this launches a Spark job of its own, because Spark must actually read the data in order to count the records in each partition.
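If you prefer to stay in the DataFrame API and avoid the round trip through the RDD, the same result can be sketched with the built-in spark_partition_id function (assuming Spark 2.x or later, where it is available):

```scala
import org.apache.spark.sql.functions.spark_partition_id

df
  .groupBy(spark_partition_id().as("partition_number"))
  .count()
  .orderBy("partition_number")
  .show()
```

This still scans the data, so it also triggers a job, but it keeps everything in SQL/DataFrame operations that Catalyst can optimize.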
Spark can also read Hive table statistics, but I don't know how to display that metadata.
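For what it's worth, one possible way to compute and then inspect such statistics is through Spark SQL; this is only a sketch, and the table name `my_table` is a hypothetical placeholder:

```scala
// Compute table-level statistics for a Hive-backed table
// (table name "my_table" is an assumption for illustration)
spark.sql("ANALYZE TABLE my_table COMPUTE STATISTICS")

// The collected statistics (e.g. row count, size in bytes) show up
// among the table details printed by DESCRIBE EXTENDED
spark.sql("DESCRIBE EXTENDED my_table").show(numRows = 100, truncate = false)
```

These are table-level numbers, though, not per-partition record counts, so they answer a slightly different question than the snippet above.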