Python asynchronous http request

I am trying to use the Twitter search web service from Python. For example, I want to call:

http://search.twitter.com/search.json?q=blue%20angels&rpp=5&include_entities=true&result_type=mixed 

from my python program.

Can anybody tell me

  • how to use an XMLHttpRequest-style object in Python,

  • how to pass parameters to it, and

  • how to get the data into a dictionary?

Here is my attempt:

    import urllib
    import sys

    url = "http://search.twitter.com/search.json?q=blue%20angels&rpp=5&include_entities=true&result_type=mixed"
    urlobj = urllib.urlopen(url)
    data = urlobj.read()
    print data

Thanks.

2 answers

You do not need an "asynchronous HTTP request" to use the Twitter search API:

    import json
    import urllib
    import urllib2

    # make query
    query = urllib.urlencode(dict(q="blue angel", rpp=5,
                                  include_entities=1, result_type="mixed"))

    # make request
    resp = urllib2.urlopen("http://search.twitter.com/search.json?" + query)

    # make dictionary (parse json response)
    d = json.load(resp)
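For readers on Python 3, where urllib and urllib2 were merged into urllib.parse and urllib.request, the query-building and dictionary steps look like this (a sketch only; the response body is canned because the search.twitter.com endpoint has since been retired):

```python
import json
from urllib.parse import urlencode

# build the same query string as the urllib2 example above
query = urlencode({"q": "blue angel", "rpp": 5,
                   "include_entities": 1, "result_type": "mixed"})
url = "http://search.twitter.com/search.json?" + query

# the live call would be urllib.request.urlopen(url);
# a canned body stands in for the retired endpoint
body = '{"results": [{"text": "first"}, {"text": "second"}]}'
d = json.loads(body)
print(len(d["results"]))
```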

There are probably several libraries that provide a good OO interface around these HTTP requests.

To make multiple requests at the same time, you can use gevent:

    import gevent
    import gevent.monkey
    gevent.monkey.patch_all()  # patch stdlib

    import json
    import urllib
    import urllib2

    def f(querystr):
        query = urllib.urlencode(dict(q=querystr, rpp=5,
                                      include_entities=1, result_type="mixed"))
        resp = urllib2.urlopen("http://search.twitter.com/search.json?" + query)
        d = json.load(resp)
        print('number of results %d' % (len(d['results']),))

    jobs = [gevent.spawn(f, q) for q in ['blue angel', 'another query']]
    gevent.joinall(jobs)  # wait for completion
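If pulling in gevent is not an option, the standard library's concurrent.futures gives the same fan-out pattern; this sketch stubs out the HTTP call with a local function so it runs without the network:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def fetch(querystr):
    # stand-in for urllib2.urlopen(...).read(); returns a canned JSON body
    return json.dumps({"query": querystr, "results": []})

def f(querystr):
    d = json.loads(fetch(querystr))
    return d["query"]

# run the "requests" concurrently; pool.map preserves input order
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(f, ["blue angel", "another query"]))

print(results)
```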

I would recommend checking out requests and its async module.

A simple request:

    import json
    import requests

    params = {'rpp': 5, 'include_entities': 1,
              'result_type': 'mixed', 'q': 'blue angel'}
    r = requests.get('http://search.twitter.com/search.json', params=params)
    print json.loads(r.text)
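Under the hood, requests percent-encodes the params dict into the query string for you, much as urlencode does. A quick round-trip illustration of that encoding (standard library only, not requests internals):

```python
from urllib.parse import urlencode, parse_qs

params = {'rpp': 5, 'include_entities': 1,
          'result_type': 'mixed', 'q': 'blue angel'}
query = urlencode(params)

# the space in 'blue angel' is encoded, and parse_qs recovers the values
decoded = parse_qs(query)
print(decoded['q'])
```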

Asynchronous:

    import json
    from requests import async

    def add_option_params(args):
        options = {'rpp': 5, 'include_entities': 1, 'result_type': 'mixed'}
        args['params'].update(options)
        return args

    requests = []
    for search_term in ['test1', 'test2', 'test3']:
        request = async.get('http://search.twitter.com/search.json',
                            params={'q': search_term},
                            hooks={'args': add_option_params})
        requests.append(request)

    for result in async.map(requests):
        print result.url, json.loads(result.text)['completed_in']

Source: https://habr.com/ru/post/1402032/
