If you use Linux or macOS, or otherwise have access to the Unix awk utility (and optionally sort), you can get a per-table row count and approximate size by analyzing the database dump:
```sh
# substitute '.dump mytable' for '.dump' if you want to limit to a specific table
sqlite3 db.sqlite3 '.dump' | awk -f sqlite3_size.awk
```
which returns:
```
table              count   est. size
my_biggest_table    1090    60733958
my_table2          26919     7796902
my_table3          10390     2732068
```
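To limit the analysis to a single table, as the comment above notes, pass the table name to `.dump` (here `mytable` stands in for your table):

```sh
sqlite3 db.sqlite3 '.dump mytable' | awk -f sqlite3_size.awk
```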
and uses the following awk script, saved as sqlite3_size.awk:
```awk
/INSERT INTO/ { # parse INSERT commands
    split($0, values, "VALUES");           # extract everything after VALUES
    split(values[1], name, "INSERT INTO"); # get tablename
    tablename = name[2];
    gsub(/[\047\042]/, "", tablename);     # remove single and double quotes from name
    gsub(/[\047,]/, "", values[2]);        # remove single-quotes and commas
    sizes[tablename] += length(values[2]) - 3; # subtract 3 for parens and semicolon
    counts[tablename] += 1;
}
END {
    print "table\tcount\test. size"
    for (k in sizes) {
        # print and sort in descending order:
        print k "\t" counts[k] "\t" sizes[k] | "sort -k3 -n -r";
        # or, if you don't have the sort command:
        # print k "\t" counts[k] "\t" sizes[k];
    }
}
```
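For reference, `.dump` emits one INSERT statement per row, so each line the script matches looks something like the following (the table name and values here are illustrative, not taken from the output above):

```sql
INSERT INTO my_table2 VALUES('abc',123,'def');
```

Splitting on "VALUES" leaves `('abc',123,'def');` in values[2]; after the quotes and commas are stripped, its length minus 3 (for the two parentheses and the semicolon) approximates the size of that row's data.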
The estimated size is based on the length of each INSERT INTO line, and therefore will not equal the actual size on disk, but for me the row count plus an estimated size is more useful than other alternatives, such as the number of pages.
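If you do want on-disk numbers for comparison, and your sqlite3 build includes the dbstat virtual table (it requires the SQLITE_ENABLE_DBSTAT_VTAB compile-time option, which many distribution builds enable), a query like this reports the bytes of database pages actually used:

```sh
# actual bytes of pages used per object, largest first (requires dbstat support)
sqlite3 db.sqlite3 "SELECT name, SUM(pgsize) AS bytes FROM dbstat GROUP BY name ORDER BY bytes DESC;"
```

Note that dbstat lists tables and indexes as separate rows.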
mwag, 2019-02-03 21:50