Here is how I evaluated this for my current project, with a dataset of 2.5M records in one table.
I only read data and counted records; for example, I needed to find the IDs of records whose "name" field had been updated more than once within certain intervals. The Django test used the ORM to retrieve all the records and then iterate over them, storing the data in a list for further processing. There was no debug output other than the printed result at the end.
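As a rough illustration of the Django side, the test looked something like this. The model and field names (Record, name, updated_at) are placeholders of mine, not the real project schema:

```python
# Hypothetical model: Record(id, name, updated_at) -- placeholder names,
# not the actual project schema.
from myapp.models import Record

def django_pass():
    results = []
    # Pull every row through the ORM and iterate; each row is materialized
    # as a full model instance before we copy the fields we need.
    for rec in Record.objects.all():
        results.append((rec.id, rec.name, rec.updated_at))
    print(len(results))  # only output: the final count
    return results
```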
For comparison, I used MySQLdb directly, running the same queries (captured from Django) and building the same structure: classes to hold the data, with the instances stored in a list for further processing. Again, no debug output other than the printed result at the end.
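A sketch of the raw MySQLdb version, under the same placeholder names; the actual SQL was whatever Django generated, and the connection parameters here are made up:

```python
import MySQLdb

class Row:
    """Plain container mirroring the fields used above."""
    __slots__ = ("id", "name", "updated_at")

    def __init__(self, id, name, updated_at):
        self.id = id
        self.name = name
        self.updated_at = updated_at

def mysqldb_pass():
    # Placeholder connection parameters.
    conn = MySQLdb.connect(host="localhost", user="user",
                           passwd="password", db="mydb")
    cur = conn.cursor()
    # Same query the ORM issues, written by hand (table name assumed).
    cur.execute("SELECT id, name, updated_at FROM myapp_record")
    results = [Row(*row) for row in cur.fetchall()]
    cur.close()
    conn.close()
    print(len(results))  # only output: the final count
    return results
```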
I found that:
                      without Django    with Django
execution time        x                 10x
memory consumption    y                 25y
And that was only reading and counting, without running any UPDATE or INSERT queries.
Try investigating this question yourself; the benchmark is not difficult to write and run.
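A minimal way to time both passes and get a rough memory figure, using only the standard library and the two sketch functions above (this is my harness, not the original benchmark; tracemalloc only sees Python-level allocations, so C-level buffers are not counted):

```python
import time
import tracemalloc

def measure(fn):
    tracemalloc.start()
    t0 = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{fn.__name__}: {elapsed:.2f}s, peak {peak / 2**20:.1f} MiB")

measure(django_pass)
measure(mysqldb_pass)
```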