Python caching options / speeding up urlopen

Hi everyone. I have a website, written in Python, that looks up information for the end user and makes several urlopen calls per request. As a result, pages take a while to load. Is there a way to make this faster? Is there a simple Python way to cache, or a way to speed up the urlopen calls?

The urlopen calls hit the Amazon API to get prices, so the site needs to stay reasonably up to date. The only option I can think of is a script that populates a MySQL db and runs every so often, but that would be unpleasant.

Thanks!

+3
5 answers

httplib2 understands HTTP request caching, abstracts away some of the messiness of urllib/urllib2, and has other goodies such as gzip support.

http://code.google.com/p/httplib2/
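With httplib2 the caching is as simple as constructing `httplib2.Http(".cache")`. For reference, here is a rough stdlib sketch of the same idea: cache response bodies on disk keyed by a hash of the URL. The function names are mine, not from any library, and the fetch function is injectable so the sketch can be exercised without a network.

```python
import hashlib
import os
import urllib.request

def cached_fetch(url, cache_dir=".cache", fetch=None):
    """Return the body for `url`, reading from an on-disk cache when present.

    `fetch` is injectable for testing; by default it uses urlopen.
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u).read()
    os.makedirs(cache_dir, exist_ok=True)
    # Key the cache file on a hash of the URL.
    key = hashlib.sha1(url.encode("utf-8")).hexdigest()
    path = os.path.join(cache_dir, key)
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    body = fetch(url)
    with open(path, "wb") as f:
        f.write(body)
    return body
```

Note this naive version never expires entries; httplib2 does it properly by honoring the HTTP cache-control headers.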

But beyond using that to fetch the data, if the data set is not very large I would also implement some kind of function caching/memoizing. Example: http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize

It would not be too difficult to modify that decorator to support expiration, e.g. only cache the result for 15 minutes.

If there are more results than comfortably fit in memory, you need to start looking at memcached/redis.

+3

Why not cache the results (in a database or on disk) and refresh them from a cron script (or similar), so page loads never wait on the network?

Also, check the Amazon API's terms of use: there may be limits on how long you are allowed to cache price data.

0

A few thoughts on this:

  • urllib itself is not the bottleneck; the time goes into the network round trips to Amazon's servers.

  • Move the fetching into a separate script and run it on a schedule with cron (or similar). Store the results where the site can read them.

  • Cache results keyed by URL/parameters, so repeated requests for the same item are served locally.
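The cron approach could look something like the sketch below: a small script, run every few minutes from crontab, that fetches all the prices and dumps them to a JSON file the web app reads. All names (the script, the item IDs, the `fetch_price` stand-in for the real Amazon API call) are illustrative, not from the original post.

```python
# refresh_prices.py -- hypothetical script run from cron, e.g.:
#   */15 * * * * python refresh_prices.py
# The web app then reads prices.json instead of calling urlopen per page load.
import json

def refresh(item_ids, fetch_price, out_path="prices.json"):
    """Fetch every item's price and write the result as one JSON file.

    `fetch_price` stands in for the real (slow) Amazon API call.
    """
    prices = {item_id: fetch_price(item_id) for item_id in item_ids}
    with open(out_path, "w") as f:
        json.dump(prices, f)
    return prices
```

The page-serving code then does a single local file read, which is fast regardless of how slow the upstream API is.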

0

Take a look at memcached. It is designed for exactly this kind of read-heavy pattern. For using it from Python, see:

Good examples of python-memcache (memcached) being used in Python?

You can populate memcached from a cron script and have the website read from it, so requests never block on the API.

If memcached feels like overkill, you could also serialize results to disk with cPickle (together with a timestamp, so stale entries can be expired).
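A sketch of that pickle-to-disk idea, storing a timestamp alongside the data so freshness can be checked on load (function and parameter names are mine; this uses Python 3's `pickle`, the successor of cPickle):

```python
import os
import pickle
import time

def load_or_refresh(path, compute, max_age_seconds=15 * 60):
    """Return cached data from `path` if fresh enough, else recompute and save.

    The file stores a (timestamp, data) pair so staleness is checked on load.
    `compute` stands in for the expensive API-fetching function.
    """
    if os.path.exists(path):
        with open(path, "rb") as f:
            stamp, data = pickle.load(f)
        if time.time() - stamp < max_age_seconds:
            return data  # cache hit, still fresh
    data = compute()
    with open(path, "wb") as f:
        pickle.dump((time.time(), data), f)
    return data
```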

0

You could do the fetching asynchronously with asyncore: http://docs.python.org/library/asyncore.html

That way you can easily load multiple pages at once.
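Note that asyncore is deprecated (and removed in Python 3.12). A simpler way to get the same "several pages at once" effect today is a thread pool; a sketch, with the fetch function injectable so it can be tried without a network:

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch_all(urls, fetch=None, max_workers=8):
    """Fetch several URLs concurrently; return {url: body}.

    Threads work well here because the time is spent waiting on the network.
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u).read()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order, so zip pairs each URL with its body.
        bodies = pool.map(fetch, urls)
    return dict(zip(urls, bodies))
```

Total wall time then approaches that of the slowest single request instead of the sum of all of them.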

0

Source: https://habr.com/ru/post/1759328/
