Async Solr Queries in Python
I frequently hit the wall of needing to work asynchronously with Solr requests in Python. I'll have some code that blocks on a Solr HTTP request, waits for it to complete, then executes a second request. Something like this code (we're using the Requests library to do HTTP):

```python
import requests

# Search 1
solrResp = requests.get('http://mysolr.com/solr/statedecoded/search?q=law')
for doc in solrResp.json()['response']['docs']:
    print doc['catch_line']

# Search 2
solrResp = requests.get('http://mysolr.com/solr/statedecoded/search?q=shoplifting')
for doc in solrResp.json()['response']['docs']:
    print doc['catch_line']
```

Being able to parallelize work is especially helpful with scripts that index documents into Solr. I need to scale my work up so that Solr, not network access, is the indexing bottleneck.
Unfortunately, Python isn't exactly Javascript or Go when it comes to doing asynchronous programming. But the gevent coroutine library can help us a bit with that. Under the hood, gevent uses the libevent library. Built on top of native async calls such as select and poll (the original async), libevent nicely leverages a lot of low-level async functionality.
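To get a feel for the coroutine model, here is a minimal sketch of my own (not from the original post). Two greenlets spawned with gevent.spawn make progress during each other's waits, because gevent.sleep yields control to the event loop:

```python
import gevent

def task(name, seconds):
    # gevent.sleep cooperatively yields, letting other greenlets
    # run while this one waits
    print('%s started' % name)
    gevent.sleep(seconds)
    print('%s finished after %ss' % (name, seconds))

# Total runtime is ~2s, not 3s, because the waits overlap
handles = [gevent.spawn(task, 'first', 2),
           gevent.spawn(task, 'second', 1)]
gevent.joinall(handles)
```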
Working with gevent is fairly straightforward. One slight sticking point is gevent.monkey.patch_all(), which patches a lot of the standard library to cooperate better with gevent's asynchrony. It sounds scary, but I have yet to have a problem with the monkey-patched implementations.
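As a rough illustration of what the patching buys you (my own sketch, assuming gevent's standard patching of the socket module), even a plain blocking call like socket.gethostbyname only parks the current greenlet after patch_all, so several lookups can overlap:

```python
from gevent import monkey
monkey.patch_all()  # must run before the blocking calls below

import socket
import gevent

def resolve(host):
    # Normally this call would block the whole process; after
    # patch_all it only blocks this greenlet
    print('%s -> %s' % (host, socket.gethostbyname(host)))

gevent.joinall([gevent.spawn(resolve, h)
                for h in ['example.com', 'python.org']])
```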
Without further ado, here's how you use gevent to do parallel Solr requests:
```python
from gevent import monkey
monkey.patch_all()  # patch the standard library before Requests uses it

import gevent
import requests

class Searcher(object):
    """ Simple wrapper for doing a search and collecting the results """
    def __init__(self, searchUrl):
        self.searchUrl = searchUrl

    def search(self):
        solrResp = requests.get(self.searchUrl)
        self.docs = solrResp.json()['response']['docs']

def searchMultiple(urls):
    """ Use gevent to execute the passed in urls; dump the results """
    searchers = [Searcher(url) for url in urls]

    # Gather a handle for each task
    handles = []
    for searcher in searchers:
        handles.append(gevent.spawn(searcher.search))

    # Block until all work is done
    gevent.joinall(handles)

    # Dump the results
    for searcher in searchers:
        print "Search Results for %s" % searcher.searchUrl
        for doc in searcher.docs:
            print doc['catch_line']

searchUrls = ['http://mysolr.com/solr/statedecoded/search?q=law',
              'http://mysolr.com/solr/statedecoded/search?q=shoplifting']
searchMultiple(searchUrls)
```

Lots more code, and not nearly as pretty as the equivalent Javascript, but it gets the job done. The meat of the code is these lines:

```python
# Gather a handle for each task
handles = []
for searcher in searchers:
    handles.append(gevent.spawn(searcher.search))

# Block until all work is done
gevent.joinall(handles)
```

We tell gevent to spawn searcher.search. This gives us a handle to the spawned task. We can then optionally wait for all the spawned tasks to complete, then dump the results.

That's about it! As always, comment if you have any thoughts or pointers. And let us know how we can help with any part of your Solr search application!
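As a postscript, the same spawn-and-join pattern extends to the indexing use case mentioned at the top. The sketch below is mine, not from the original post: the update URL, batch size, and documents are hypothetical. A gevent.pool.Pool caps how many update requests are in flight at once, so Solr, rather than the client, becomes the bottleneck without opening unbounded connections:

```python
from gevent import monkey
monkey.patch_all()  # as before, patch the stdlib first

import json
import gevent.pool
import requests

# Hypothetical Solr JSON update endpoint and documents
updateUrl = 'http://mysolr.com/solr/statedecoded/update/json'
docs = [{'id': str(i), 'catch_line': 'Law %d' % i} for i in range(100)]

def indexBatch(batch):
    # POST one batch of documents to Solr as a JSON update
    resp = requests.post(updateUrl, data=json.dumps(batch),
                         headers={'Content-Type': 'application/json'})
    print('indexed %d docs -> HTTP %d' % (len(batch), resp.status_code))

# A pool of 10 greenlets keeps at most 10 requests in flight at a time
pool = gevent.pool.Pool(10)
batchSize = 20
for i in range(0, len(docs), batchSize):
    pool.spawn(indexBatch, docs[i:i + batchSize])
pool.join()  # block until every batch has been sent
```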
Reposted from: https://my.oschina.net/letiantian/blog/323933