SQLite disk space usage per table

How can I find out how much disk space a single table inside an SQLite database uses, without copying it to a new empty database?

+54 · sqlite · May 05 '11 at 15:09

4 answers

You can use sqlite3_analyzer from http://www.sqlite.org/download.html.

This is a really cool tool. It shows the number of pages used by each table, with and without indexes (each page is 1024 bytes by default).
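To check what your own database actually uses, both values are exposed as standard pragmas, and multiplying them gives the total file size. A quick sketch (db.sqlite is a placeholder name; the output values are illustrative):

 $ sqlite3 db.sqlite 'PRAGMA page_size; PRAGMA page_count;'
 1024
 220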

This is a sample sqlite3_analyzer output for the Northwind database:

 *** Page counts for all tables with their indices ********************

 EMPLOYEES............................. 200         34.4%
 ORDERS................................ 152         26.2%
 CATEGORIES............................ 90          15.5%
 ORDER DETAILS......................... 81          13.9%
 CUSTOMERS............................. 17           2.9%
 SQLITE_MASTER......................... 11           1.9%
 PRODUCTS.............................. 7            1.2%
 SUPPLIERS............................. 7            1.2%
 TERRITORIES........................... 6            1.0%
 CUSTOMERCUSTOMERDEMO.................. 2            0.34%
 CUSTOMERDEMOGRAPHICS.................. 2            0.34%
 EMPLOYEETERRITORIES................... 2            0.34%
 REGION................................ 2            0.34%
 SHIPPERS.............................. 2            0.34%

It also generates SQL statements that create a database containing the usage statistics, which you can then query and analyze.
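If you want the same numbers without an external tool: sqlite3 builds compiled with SQLITE_ENABLE_DBSTAT_VTAB (this is the mechanism sqlite3_analyzer itself uses) expose a DBSTAT virtual table, so a plain query gives per-table byte counts. A sketch, assuming such a build and a placeholder file name:

 $ sqlite3 db.sqlite "SELECT name, SUM(pgsize) AS bytes FROM dbstat GROUP BY name ORDER BY bytes DESC;"

Each row of dbstat describes one page, so summing pgsize per name yields the on-disk bytes of each table or index.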

+79 · Feb 10 '12 at 22:30

I understand that this answer completely violates the spirit of the question, but it gives you the size without copying the file ...

 $ ls -lh db.sqlite
 -rw-r--r-- 1 dude bros 44M Jan 11 18:44 db.sqlite
 $ sqlite3 db.sqlite
 sqlite> drop table my_table;
 sqlite> vacuum;
 sqlite> ^D
 $ ls -lh db.sqlite
 -rw-r--r-- 1 dude bros 23M Jan 11 18:44 db.sqlite
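If you cannot afford to actually drop the table, the same measurement works on a throwaway copy of the file (which admittedly strays even further from the question). A sketch, with file and table names as placeholders:

 $ cp db.sqlite /tmp/db-copy.sqlite
 $ sqlite3 /tmp/db-copy.sqlite 'DROP TABLE my_table; VACUUM;'
 $ ls -lh db.sqlite /tmp/db-copy.sqlite   # the size difference is the table's share
 $ rm /tmp/db-copy.sqlite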
+5 · Jan 12 '17 at 2:54

If you use Linux or macOS, or otherwise have access to the Unix awk utility (and optionally sort), you can get the row count and approximate size of each table by analyzing a dump:

 # substitute '.dump mytable' for '.dump' if you want to limit the analysis to a specific table
 sqlite3 db.sqlite3 '.dump' | awk -f sqlite3_size.awk

which returns:

 table             count   est. size
 my_biggest_table  1090    60733958
 my_table2         26919   7796902
 my_table3         10390   2732068

using the following awk script (saved as sqlite3_size.awk):

 /INSERT INTO/ {                                # parse INSERT commands
     split($0, values, "VALUES");               # extract everything after VALUES
     split(values[1], name, "INSERT INTO");     # get the table name
     tablename = name[2];
     gsub(/[\047\042]/, "", tablename);         # remove single and double quotes from the name
     gsub(/[\047,]/, "", values[2]);            # remove single quotes and commas
     sizes[tablename] += length(values[2]) - 3; # subtract 3 for the parens and semicolon
     counts[tablename] += 1;
 }
 END {
     print "table\tcount\test. size"
     for (k in sizes) {
         # print, sorted in descending order of size:
         print k "\t" counts[k] "\t" sizes[k] | "sort -k3 -n -r";
         # or, if you don't have the sort command:
         # print k "\t" counts[k] "\t" sizes[k];
     }
 }

The estimated size is based on the length of each "INSERT INTO" line, and therefore will not equal the actual size on disk, but for me the row count plus an estimated size is more useful than alternatives such as page counts.
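To make the estimate concrete: .dump emits one INSERT statement per row, and the script measures the text after VALUES. For a hypothetical table the input would look something like this (table name and values are made up):

 $ sqlite3 db.sqlite3 '.dump my_table' | grep 'INSERT INTO' | head -2
 INSERT INTO my_table VALUES(1,'alice',3.14);
 INSERT INTO my_table VALUES(2,'bob',2.72);

So each row contributes roughly the byte length of its serialized values, which tracks on-disk size reasonably for text-heavy tables and less closely for numeric ones.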

+3 · Feb 03 '19 at 21:50

I ran into problems with the other answers here (namely, sqlite3_analyzer does not work on Linux). I ended up creating the following Bash function, which (temporarily) writes each table out to disk in order to measure its on-disk size. Technically this copies the database, which is not in keeping with the spirit of the OP's question, but it gave me the information I was after.

 function sqlite_size() {
     TMPFILE="/tmp/__sqlite_size_tmp"
     DB=$1
     IFS=" "
     TABLES=`sqlite3 $DB .tables`
     for i in $TABLES; do
         \rm -f "$TMPFILE"                  # \rm bypasses any rm alias
         sqlite3 $DB ".dump $i" | sqlite3 $TMPFILE
         echo $i `cat $TMPFILE | wc -c`     # table name and its standalone size in bytes
         \rm -f "$TMPFILE"
     done
 }

Example:

 $ sqlite_size sidekick.sqlite
 SequelizeMeta 12288
 events 16384
 histograms 20480
 programs 20480
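If several shells might run this at once, the fixed /tmp filename is a hazard. A minimal variant using mktemp (renamed sqlite_size2 so it does not clobber the function above) could look like this:

 # a sketch: same idea, but with a unique temporary file per table
 function sqlite_size2() {
     local db=$1 tmp t
     for t in $(sqlite3 "$db" .tables); do
         tmp=$(mktemp)
         sqlite3 "$db" ".dump $t" | sqlite3 "$tmp"
         echo "$t $(wc -c < "$tmp")"        # table name and its standalone size in bytes
         rm -f "$tmp"
     done
 }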
0 · Apr 04 '19 at 20:55


