In profiling my Python 2.7 App Engine app, I found that it takes an average of 7 ms per record to deserialize records fetched from ndb into Python objects (in pb_to_query_result, pb_to_entity and their descendants; this excludes the RPC time to query the datastore and retrieve the raw records).
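For context, the per-record figure came from profiling the query loop. A minimal sketch of that kind of measurement with cProfile, using a stub `deserialize` function standing in for ndb's `pb_to_entity` (hypothetical names; in the real app you would profile the actual fetch loop, which needs the App Engine SDK):

```python
import cProfile
import io
import pstats

def deserialize(record):
    # Stub for ndb's pb_to_entity: turn a raw record into a dict.
    return {name: value for name, value in record}

def fetch_and_deserialize(n):
    # Fake "raw records" (30 properties each) so the sketch is self-contained.
    raw = [[("prop%d" % i, i) for i in range(30)] for _ in range(n)]
    return [deserialize(r) for r in raw]

profiler = cProfile.Profile()
profiler.enable()
entities = fetch_and_deserialize(1000)
profiler.disable()

# Report cumulative time spent in deserialize() across all 1000 records.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats("deserialize")
print(stream.getvalue())
```

Dividing the cumulative time for the deserialization function by the record count gives the per-record cost quoted above.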
Is this expected? My model has six properties, one of which is a LocalStructuredProperty with 15 properties, which in turn includes a repeated StructuredProperty with four properties; even so, the average entity should hold fewer than 30 property values in total.
Is such a slowdown expected? I want to pull several thousand records to do a simple aggregate analysis, and while I can tolerate some latency, more than 10 seconds is a problem. Is there anything I can do to restructure my models or my schema to make this more viable? (Other than the obvious solution of precomputing my aggregate analysis periodically and caching the results.)
If it is unusual for this to be slow, it would be useful to know that, so I can go looking for whatever I am doing that makes it worse.