Scrapy - Item Pipeline

Item Pipeline (English): http://doc.scrapy.org/en/latest/topics/item-pipeline.html
Item Pipeline (Chinese): https://scrapy-chs.readthedocs.io/zh_CN/latest/topics/item-pipeline.html
Scrapy 1.3 documentation - Item Pipeline: https://oner-wv.gitbooks.io/scrapy_zh/content/基本概念/item管道.html
Multiple pipelines in Scrapy (Baidu search): https://www.baidu.com/s?wd=scrapy 多个pipeline
How to route items from different spiders to different pipelines: https://segmentfault.com/q/1010000006890114?_ea=1166343

The pipelines directory of the Scrapy source tree contains three files: files.py, images.py and media.py. In these three files Scrapy provides three different built-in pipelines.

Python 3 + Scrapy: downloading all (well, most) songs from NetEase Cloud Music: https://blog.csdn.net/Heibaiii/article/details/79323634
Downloading images with ImagesPipeline: https://blog.csdn.net/freeking101/article/details/87860892
Downloading wallpaper images with ImagesPipeline: https://blog.csdn.net/qq_28817739/article/details/79904391
Item Pipeline

After an item has been scraped by a spider, it is sent to the Item Pipeline, which processes it through several components that are executed sequentially.

Each item pipeline component (sometimes referred to simply as an "item pipeline") is a Python class that implements a simple method. It receives an item, performs an action on it, and also decides whether the item should continue through the pipeline or be dropped and no longer processed.

Typical uses of item pipelines are:
- cleansing HTML data
- validating scraped data (checking that the items contain certain fields)
- checking for (and dropping) duplicates
- storing the scraped item in a database
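As a quick illustration of the first two uses, here is a minimal sketch of a cleaning-and-validation pipeline (the 'title' field name is assumed purely for illustration):

from scrapy.exceptions import DropItem


class CleanAndValidatePipeline(object):
    def process_item(self, item, spider):
        # Validation: drop the item if the required field is absent or empty.
        if not item.get('title'):
            raise DropItem('Missing title in %s' % item)
        # Cleansing: normalize surrounding whitespace.
        item['title'] = item['title'].strip()
        return item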
Writing your own item pipeline

Each item pipeline component is a Python class that implements the following methods:
process_item(self, item, spider)

This method is called for every item pipeline component. process_item() must either: return a dict with data, return an Item (or any descendant class) object, return a Twisted Deferred, or raise a DropItem exception. Dropped items are no longer processed by further pipeline components.

Parameters:
- item (Item object or dict) - the scraped item
- spider (Spider object) - the spider which scraped the item, i.e. an instance of your own Spider subclass
open_spider(self, spider): this method is called when the spider is opened.
Parameters: spider (Spider object) - the spider which was opened, i.e. an instance of your own Spider subclass

close_spider(self, spider): this method is called when the spider is closed.
Parameters: spider (Spider object) - the spider which was closed, i.e. an instance of your own Spider subclass
from_crawler(cls, crawler)

If present, this classmethod is called to create a pipeline instance from a Crawler. It must return a new instance of the pipeline. The Crawler object provides access to all Scrapy core components, such as settings and signals; it is a way for the pipeline to access them and hook its functionality into Scrapy.
Parameters: crawler (Crawler object) - the crawler that uses this pipeline
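For instance, here is a minimal sketch of a pipeline that uses from_crawler to read a custom setting (the class name and the STATS_PIPELINE_ENABLED setting are hypothetical, for illustration only):

class StatsPipeline(object):
    def __init__(self, enabled):
        self.enabled = enabled

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings gives access to the project settings;
        # STATS_PIPELINE_ENABLED is a made-up setting name for this sketch.
        return cls(enabled=crawler.settings.getbool('STATS_PIPELINE_ENABLED', True))

    def process_item(self, item, spider):
        return item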
Spider example:

images.py:
# -*- coding: utf-8 -*-
from scrapy import Spider, Request
from urllib.parse import urlencode
import json

from images360.items import ImageItem


class ImagesSpider(Spider):
    name = 'images'
    allowed_domains = ['images.so.com']
    start_urls = ['http://images.so.com/']

    def start_requests(self):
        data = {'ch': 'photography', 'listtype': 'new'}
        base_url = 'https://image.so.com/zj?'
        for page in range(1, self.settings.get('MAX_PAGE') + 1):
            data['sn'] = page * 30
            params = urlencode(data)
            url = base_url + params
            yield Request(url, self.parse)

    def parse(self, response):
        result = json.loads(response.text)
        for image in result.get('list'):
            item = ImageItem()
            item['id'] = image.get('imageid')
            item['url'] = image.get('qhimg_url')
            item['title'] = image.get('group_title')
            item['thumb'] = image.get('qhimg_thumb_url')
            yield item

items.py:
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy import Item, Field


class ImageItem(Item):
    collection = table = 'images'
    id = Field()
    url = Field()
    title = Field()
    thumb = Field()

pipelines.py:
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo
import pymysql
from scrapy import Request
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline


class MongoPipeline(object):
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        name = item.collection
        self.db[name].insert(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()


class MysqlPipeline(object):
    def __init__(self, host, database, user, password, port):
        self.host = host
        self.database = database
        self.user = user
        self.password = password
        self.port = port

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            host=crawler.settings.get('MYSQL_HOST'),
            database=crawler.settings.get('MYSQL_DATABASE'),
            user=crawler.settings.get('MYSQL_USER'),
            password=crawler.settings.get('MYSQL_PASSWORD'),
            port=crawler.settings.get('MYSQL_PORT'),
        )

    def open_spider(self, spider):
        self.db = pymysql.connect(self.host, self.user, self.password,
                                  self.database, charset='utf8', port=self.port)
        self.cursor = self.db.cursor()

    def close_spider(self, spider):
        self.db.close()

    def process_item(self, item, spider):
        print(item['title'])
        data = dict(item)
        keys = ', '.join(data.keys())
        values = ', '.join(['%s'] * len(data))
        sql = 'insert into %s (%s) values (%s)' % (item.table, keys, values)
        self.cursor.execute(sql, tuple(data.values()))
        self.db.commit()
        return item


class ImagePipeline(ImagesPipeline):
    def file_path(self, request, response=None, info=None):
        url = request.url
        file_name = url.split('/')[-1]
        return file_name

    def item_completed(self, results, item, info):
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem('Image Downloaded Failed')
        return item

    def get_media_requests(self, item, info):
        yield Request(item['url'])
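The three pipelines above read a number of custom settings, and ImagesPipeline needs IMAGES_STORE to know where to save files. Here is a sketch of the corresponding settings.py (all concrete values are assumptions for illustration, not taken from the source):

MAX_PAGE = 50                      # used by ImagesSpider.start_requests()

MONGO_URI = 'localhost'
MONGO_DB = 'images360'

MYSQL_HOST = 'localhost'
MYSQL_DATABASE = 'images360'
MYSQL_USER = 'root'
MYSQL_PASSWORD = 'password'
MYSQL_PORT = 3306

IMAGES_STORE = './images'          # where ImagesPipeline saves downloads

ITEM_PIPELINES = {
    'images360.pipelines.ImagePipeline': 300,
    'images360.pipelines.MongoPipeline': 301,
    'images360.pipelines.MysqlPipeline': 302,
}

The spider is then run with "scrapy crawl images".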
Item pipeline examples

Price validation and dropping items with no price

Let's take a look at the following hypothetical pipeline that adjusts the price attribute for items that do not include VAT (price_excludes_vat attribute), and drops items that contain no price:
from scrapy.exceptions import DropItem


class PricePipeline(object):
    vat_factor = 1.15

    def process_item(self, item, spider):
        if item['price']:
            if item['price_excludes_vat']:
                item['price'] = item['price'] * self.vat_factor
            return item
        else:
            raise DropItem("Missing price in %s" % item)

Write items to a JSON file
The following pipeline stores all scraped items (from all spiders) into a single items.jl file, containing one item per line serialized in JSON format:
import json


class JsonWriterPipeline(object):
    def open_spider(self, spider):
        # Open in text mode: json.dumps() returns a str in Python 3.
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
Note

The purpose of JsonWriterPipeline is just to introduce how to write item pipelines. If you really want to store all scraped items into a JSON file, you should use the Feed exports.
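For reference, a sketch of the feed-export equivalent in settings.py (FEED_URI and FEED_FORMAT are the settings used by the Scrapy versions this article targets; newer versions replace them with the FEEDS setting):

FEED_URI = 'items.jl'
FEED_FORMAT = 'jsonlines'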
Write items to MongoDB

In this example we write items to MongoDB using pymongo. The MongoDB address and database name are specified in the Scrapy settings; the MongoDB collection is named after the item class.

The main point of this example is to show how to use the from_crawler() method and how to clean up resources properly:
import pymongo


class MongoPipeline(object):
    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert(dict(item))
        return item
Take a screenshot of the item

This example demonstrates how to return a Deferred from the process_item() method. It uses Splash to render a screenshot of the item URL. The pipeline makes a request to a locally running Splash instance. After the request is downloaded and the Deferred callback fires, it saves the item to a file and adds the filename to the item.
import scrapy
import hashlib
from urllib.parse import quote


class ScreenshotPipeline(object):
    """Pipeline that uses Splash to render a screenshot of
    every Scrapy item."""

    SPLASH_URL = "http://localhost:8050/render.png?url={}"

    def process_item(self, item, spider):
        encoded_item_url = quote(item["url"])
        screenshot_url = self.SPLASH_URL.format(encoded_item_url)
        request = scrapy.Request(screenshot_url)
        dfd = spider.crawler.engine.download(request, spider)
        dfd.addBoth(self.return_item, item)
        return dfd

    def return_item(self, response, item):
        if response.status != 200:
            # Error happened, return item.
            return item

        # Save screenshot to file, filename will be hash of url.
        url = item["url"]
        url_hash = hashlib.md5(url.encode("utf8")).hexdigest()
        filename = "{}.png".format(url_hash)
        with open(filename, "wb") as f:
            f.write(response.body)

        # Store filename in item.
        item["screenshot_filename"] = filename
        return item
Duplicates filter

A filter that looks for duplicate items and drops those that were already processed. Let's say our items have a unique id, but our spider returns multiple items with the same id:
from scrapy.exceptions import DropItem


class DuplicatesPipeline(object):
    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        if item['id'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['id'])
            return item
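Note that ids_seen lives in memory, so the filter resets between runs. A minimal sketch of a variant that persists seen ids across runs (the seen_ids.txt filename is an assumption for illustration):

import os

from scrapy.exceptions import DropItem


class PersistentDuplicatesPipeline(object):
    def open_spider(self, spider):
        # Load previously seen ids, one per line, if the file exists.
        self.path = 'seen_ids.txt'
        if os.path.exists(self.path):
            with open(self.path) as f:
                self.ids_seen = set(line.strip() for line in f)
        else:
            self.ids_seen = set()
        self.file = open(self.path, 'a')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        item_id = str(item['id'])
        if item_id in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        self.ids_seen.add(item_id)
        self.file.write(item_id + "\n")
        return item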
Activating an Item Pipeline component

To activate an Item Pipeline component you must add its class to the ITEM_PIPELINES setting, like in the following example:
ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}

The integer values you assign to classes in this setting determine the order in which they run: items go through the pipelines from lower-valued to higher-valued classes. It's customary to define these numbers in the 0-1000 range.
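Related to the "different pipelines for different spiders" question linked at the top, one common approach (a sketch; the spider name 'images' is just an example) is to check spider.name inside the pipeline:

class SelectivePipeline(object):
    def process_item(self, item, spider):
        # Only act on items from one particular spider; pass the rest through.
        if spider.name != 'images':
            return item
        # ... spider-specific processing would go here ...
        return item

Alternatively, a spider can override ITEM_PIPELINES just for itself via its custom_settings class attribute.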