Scrapy-Redis Distributed Crawling: Dangdang Books as an Example
The Scrapy-Redis distributed strategy:
Scrapy-Redis builds on Scrapy and adds more powerful capabilities, chiefly:
request deduplication, crawl persistence, and easy distribution across machines.
Suppose we have four machines: Windows 10, Mac OS X, Ubuntu 16.04, and CentOS 7.2. Any of them can serve as the Master or as a Slave, for example:
- Master (core server): Windows 10 — hosts the Redis database. It does no crawling itself; it only handles URL-fingerprint deduplication, Request allocation, and data storage.
- Slaves (crawler nodes): Mac OS X, Ubuntu 16.04, and CentOS 7.2 — run the spider program and submit new Requests back to the Master while crawling.
A Slave first takes a task (a Request / URL) from the Master and scrapes the data; any new Requests generated during scraping are submitted back to the Master for processing.
The Master holds the single Redis database: it deduplicates incoming Requests, adds the accepted ones to the pending queue, allocates tasks to Slaves, and stores the scraped data.
This is the strategy Scrapy-Redis uses by default, and it is simple to implement because Scrapy-Redis already handles the task scheduling for us: we only need to subclass RedisSpider and set a redis_key.
The drawback is that the scheduled tasks are whole Request objects, which carry a lot of information (not just the URL, but also the callback, headers, and so on). This can slow the crawl down and consume a lot of Redis memory, so sustaining good throughput requires reasonably capable hardware.
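The fingerprint-based deduplication mentioned above can be sketched as follows. This is a simplified illustration, not Scrapy's actual implementation (the real `RFPDupeFilter` canonicalizes the URL and can also hash the body, and the `seen` set lives in Redis so all Slaves share it — here a plain Python set stands in for it):

```python
import hashlib

def request_fingerprint(method: str, url: str) -> str:
    """Simplified fingerprint: SHA1 over method + URL.
    (Scrapy's real fingerprint also canonicalizes the URL and may
    include the request body.)"""
    h = hashlib.sha1()
    h.update(method.encode())
    h.update(url.encode())
    return h.hexdigest()

seen = set()  # stands in for the Redis SET shared by all Slaves

def should_crawl(method: str, url: str) -> bool:
    fp = request_fingerprint(method, url)
    if fp in seen:
        return False  # duplicate: some node already scheduled this request
    seen.add(fp)
    return True

print(should_crawl("GET", "http://book.dangdang.com/"))  # True
print(should_crawl("GET", "http://book.dangdang.com/"))  # False
```

Because the fingerprint is a fixed-size hash rather than the full Request, the dedup set stays compact even when the Requests themselves (with callbacks and headers) are large.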
Case study: scraping Dangdang book information
1. Create the Scrapy project

Use the global command `startproject` to create a project named `book` (the name used in the settings below), then enter its folder:

```shell
scrapy startproject book
cd book
```
2. Use the project command `genspider` to create the spider

```shell
scrapy genspider dangdang dangdang.com
```
3. Send requests, receive responses, and extract the data

```python
# -*- coding: utf-8 -*-
import scrapy
from scrapy_redis.spiders import RedisSpider
from copy import deepcopy


class DangdangSpider(RedisSpider):
    name = 'dangdang'
    allowed_domains = ['dangdang.com']
    # start_urls = ['http://book.dangdang.com/']
    redis_key = "dangdang"

    def parse(self, response):
        div_list = response.xpath("//div[@class='con flq_body']/div")
        for div in div_list:
            # top-level category
            item = {}
            item["b_cate"] = div.xpath("./dl/dt//text()").extract()
            # middle category
            dl_list = div.xpath("./div//dl[@class='inner_dl']")
            for dl in dl_list:
                item["m_cate"] = dl.xpath("./dt/a/text()").extract_first()
                # small category
                a_list = dl.xpath("./dd/a")
                for a in a_list:
                    item["s_cate"] = a.xpath("./@title").extract_first()
                    item["s_href"] = a.xpath("./@href").extract_first()
                    if item["s_href"] is not None:
                        # request the book list page for this category;
                        # deepcopy so later iterations don't mutate this item
                        yield scrapy.Request(
                            item["s_href"],
                            callback=self.parse_book_list,
                            meta={"item": deepcopy(item)}
                        )

    def parse_book_list(self, response):
        item = response.meta["item"]
        li_list = response.xpath("//ul[@class='bigimg']/li")
        for li in li_list:
            item["book_title"] = li.xpath("./a/@title").extract_first()
            item["book_href"] = li.xpath("./a/@href").extract_first()
            item["book_detail"] = li.xpath("./p[@class='detail']/text()").extract_first()
            item["book_price"] = li.xpath(".//span[@class='search_now_price']/text()").extract_first()
            item["book_author"] = li.xpath("./p[@class='search_book_author']/span[1]/a/@title").extract_first()
            item["book_publish_date"] = li.xpath("./p[@class='search_book_author']/span[2]/text()").extract_first()
            item["book_press"] = li.xpath("./p[@class='search_book_author']/span[3]/a/@title").extract_first()
            # yield (rather than just print) so the item pipeline receives it
            yield item
```
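Note why the spider wraps the item in `deepcopy` when scheduling requests: the same `item` dict is reused across the nested loops, so without a copy every scheduled request would end up seeing the values from the *last* iteration. A minimal illustration with plain dicts (no Scrapy needed):

```python
from copy import deepcopy

item = {"b_cate": "computers"}

# Without deepcopy: every appended reference points at the SAME dict,
# so later loop iterations overwrite what was "scheduled" earlier.
shared = []
for cate in ["python", "java"]:
    item["s_cate"] = cate
    shared.append(item)
print([d["s_cate"] for d in shared])      # ['java', 'java']

# With deepcopy: each appended entry is an independent snapshot
# of the item's state at scheduling time.
scheduled = []
for cate in ["python", "java"]:
    item["s_cate"] = cate
    scheduled.append(deepcopy(item))
print([d["s_cate"] for d in scheduled])   # ['python', 'java']
```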
4. Set up the pipeline to clean and save the data:

```python
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class BookPipeline(object):
    def process_item(self, item, spider):
        # strip surrounding whitespace, guarding against missing fields
        # (the spider sets "book_title", not "book_name")
        item["book_title"] = item["book_title"].strip() if item["book_title"] is not None else None
        item["book_publish_date"] = item["book_publish_date"].strip() if item["book_publish_date"] is not None else None
        print(item)
        return item
```
5. Configure settings so the request queue and fingerprints are stored in Redis:
Note: everything in settings can be customized, which means you can override the dedup filter and the scheduler, and decide whether scraped data should also be stored in Redis (via a pipeline).
```python
# -*- coding: utf-8 -*-

# Scrapy settings for book project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'book'

SPIDER_MODULES = ['book.spiders']
NEWSPIDER_MODULE = 'book.spiders'

# scrapy_redis components: shared dedup, shared scheduling, and persistence
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_URL = "redis://127.0.0.1:6379"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'book.middlewares.BookSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'book.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'book.pipelines.BookPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
#AUTOTHROTTLE_START_DELAY = 5
#AUTOTHROTTLE_MAX_DELAY = 60
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
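As the note above says, scraped items can also be stored in Redis itself. One way to do that is scrapy_redis's bundled `RedisPipeline`, which serializes each item into a Redis list. A sketch of the settings fragment (the priority value 400 here is an arbitrary choice, ordered after the cleaning pipeline):

```python
# settings.py fragment: also push every item into a Redis list
# (by default the list is named "<spider.name>:items")
ITEM_PIPELINES = {
    'book.pipelines.BookPipeline': 300,        # clean the item first
    'scrapy_redis.pipelines.RedisPipeline': 400,  # then store it in Redis
}
```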
6. Run the crawl: start the spider with the project command `crawl`, then push a start URL into Redis. A RedisSpider blocks after starting, waiting for its `redis_key` list to receive a URL:

```shell
# on every Slave node
scrapy crawl dangdang

# on the Master (or any machine that can reach Redis): kick off the crawl
redis-cli lpush dangdang "http://book.dangdang.com/"
```