Scrapy (1): Crawling Novels from Qidian and Saving Them to a Database
Crawling novels from Qidian (起点中文网)

Scrapy Framework Architecture

- Engine
- Scheduler
- Downloader
- Spiders
- Item Pipelines
- Downloader Middlewares
- Spider Middlewares
Requirements Analysis

Target website: https://www.qidian.com/rank/hotsales?style=1&page=1

Content to extract: novel title, author, category (type), and form
Project Setup

Create the project: open a terminal, change to the directory where the project should live, and run

```
scrapy startproject qidian_hot
```

Then open the project in PyCharm.
In the spiders directory, create a new spider source file, qidian_hot_spider.py. The code is shown in the sketch below.
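The spider code itself did not survive in the post, so here is a minimal sketch of qidian_hot_spider.py. The class and attribute names (HotSalesSpider, name = "hot") are taken from later references in this post; the XPath selectors (book-mid-info and friends) are assumptions about the hot-sales page markup at the time and may need adjusting if Qidian changes its layout.

```python
# qidian_hot_spider.py -- a minimal sketch; the XPath selectors are
# assumptions about the hot-sales page markup and may need adjusting.
from scrapy import Spider

from qidian_hot.items import QidianHotItem


class HotSalesSpider(Spider):
    name = "hot"  # used by `scrapy crawl hot`
    start_urls = ["https://www.qidian.com/rank/hotsales?style=1&page=1"]

    def parse(self, response):
        # each novel on the ranking page is assumed to sit in a
        # <div class="book-mid-info"> block
        for book in response.xpath('//div[@class="book-mid-info"]'):
            item = QidianHotItem()
            item["name"] = book.xpath("h4/a/text()").get()
            item["author"] = book.xpath('p[@class="author"]/a[1]/text()').get()
            item["type"] = book.xpath('p[@class="author"]/a[2]/text()').get()
            item["form"] = book.xpath('p[@class="author"]/span/text()').get()
            yield item
```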
items.py:

```python
import scrapy


class QidianHotItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()    # novel title
    author = scrapy.Field()  # author
    type = scrapy.Field()    # category
    form = scrapy.Field()    # form (e.g. serialized)
```

pipelines.py:
First, in settings.py, uncomment the ITEM_PIPELINES setting (around line 67).
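For reference, after uncommenting, the setting should look roughly like this (the class name matches the pipeline defined below):

```python
ITEM_PIPELINES = {
    'qidian_hot.pipelines.QidianHotPipeline': 300,
}
```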
```python
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem


class QidianHotPipeline:
    def __init__(self):
        self.author_set = set()  # names seen so far

    def process_item(self, item, spider):
        if item["name"] in self.author_set:
            raise DropItem("Duplicate item found: %s" % item)
        self.author_set.add(item["name"])  # remember the name so later duplicates are dropped
        return item
```

Run the spider: enter the following at the command line, and a CSV file is generated automatically:
```
scrapy crawl hot -o hot.csv
```

Saving to a MySQL Database
On the local MySQL server, create a database qidian and add a table hot (the table name here matches the name attribute of the HotSalesSpider class in qidian_hot_spider.py), then add the fields, as sketched below.
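The post does not give the exact schema, so the column types below are assumptions (plain VARCHAR columns for the four scraped fields). One way to create the database and table is with MySQLdb itself:

```python
import MySQLdb

# connect without selecting a database so we can create it first
conn = MySQLdb.connect(host="localhost", user="root",
                       password="123456", charset="utf8")
cur = conn.cursor()
cur.execute("CREATE DATABASE IF NOT EXISTS qidian DEFAULT CHARACTER SET utf8")
cur.execute("""
    CREATE TABLE IF NOT EXISTS qidian.hot (
        id     INT AUTO_INCREMENT PRIMARY KEY,
        name   VARCHAR(255),   -- novel title
        author VARCHAR(255),
        type   VARCHAR(100),   -- category
        form   VARCHAR(50)     -- e.g. serialized / completed
    )
""")
conn.commit()
cur.close()
conn.close()
```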
Install mysqlclient:

```
pip install mysqlclient
```
First, add the database variables to the configuration file settings.py:
```python
MYSQL_DB_NAME = 'qidian'   # the name of the database created above
MYSQL_HOST = 'localhost'
MYSQL_USER = 'root'
MYSQL_PASSWORD = '123456'
```

Then add the following code to pipelines.py:
```python
import MySQLdb
from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem


class QidianHotPipeline:
    def __init__(self):
        self.author_set = set()  # names seen so far

    def process_item(self, item, spider):
        if item["name"] in self.author_set:
            raise DropItem("Duplicate item found: %s" % item)
        self.author_set.add(item["name"])  # remember the name so later duplicates are dropped
        return item


class MySQlPipeline(object):
    def open_spider(self, spider):  # called once, before the spider starts
        db_name = spider.settings.get('MYSQL_DB_NAME', 'qidian')
        host = spider.settings.get('MYSQL_HOST', 'localhost')
        user = spider.settings.get('MYSQL_USER', 'root')
        pwd = spider.settings.get('MYSQL_PASSWORD', '123456')
        # connect to the database
        self.db_conn = MySQLdb.connect(db=db_name,
                                       host=host,
                                       user=user,
                                       password=pwd,
                                       charset="utf8")
        self.db_cursor = self.db_conn.cursor()  # get a cursor

    def process_item(self, item, spider):  # handle each item
        values = (item["name"], item["author"], item["type"], item["form"])
        # build the SQL statement
        sql = "insert into hot(name,author,type,form) values(%s, %s, %s, %s)"
        self.db_cursor.execute(sql, values)
        return item

    def close_spider(self, spider):  # called once, when the spider finishes
        self.db_conn.commit()  # commit the data
        self.db_cursor.close()
        self.db_conn.close()
```

Register the pipelines in the configuration file settings.py (around line 67):
```python
ITEM_PIPELINES = {
    'qidian_hot.pipelines.QidianHotPipeline': 300,
    'qidian_hot.pipelines.MySQlPipeline': 400,
}
```

Create a new start.py and add the following code:
```python
from scrapy import cmdline

cmdline.execute("scrapy crawl hot".split())
```

Run start.py; the data is now stored persistently. Open the table to check, for example with the snippet below.
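A quick way to confirm the rows landed: a sketch reusing the same credentials as in settings.py.

```python
import MySQLdb

conn = MySQLdb.connect(db="qidian", host="localhost", user="root",
                       password="123456", charset="utf8")
cur = conn.cursor()
cur.execute("SELECT name, author, type, form FROM hot LIMIT 5")
for row in cur.fetchall():
    print(row)  # one tuple per novel
cur.close()
conn.close()
```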
Saving to a MongoDB Database
Install pymongo (`pip install pymongo`).
Configure settings.py:
```python
ITEM_PIPELINES = {
    'qidian_hot.pipelines.QidianHotPipeline': 300,
    # 'qidian_hot.pipelines.MySQlPipeline': 400,
    'qidian_hot.pipelines.MongoDBPipeline': 400,
}

# MongoDB
MONGODB_HOST = "localhost"
MONGODB_PORT = 27017
MONGODB_NAME = "qidian"
MONGODB_COLLECTION = "hot"
```

Add the following code to pipelines.py:
```python
import pymongo


class MongoDBPipeline(object):
    def open_spider(self, spider):  # called once, before the spider starts
        host = spider.settings.get("MONGODB_HOST", "localhost")
        port = spider.settings.get("MONGODB_PORT", 27017)
        db_name = spider.settings.get("MONGODB_NAME", "qidian")
        collection_name = spider.settings.get("MONGODB_COLLECTION", "hot")
        self.db_client = pymongo.MongoClient(host=host, port=port)  # client object
        # select the database
        self.db = self.db_client[db_name]
        # select the collection
        self.db_collection = self.db[collection_name]

    def process_item(self, item, spider):  # handle each item
        item_dict = dict(item)
        self.db_collection.insert_one(item_dict)
        return item  # pass the item on to any later pipelines

    def close_spider(self, spider):  # called once, when the spider finishes
        self.db_client.close()
```

Open start.py, run it, then open MongoDB Compass to view the results.
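If Compass is not at hand, the same check can be done from Python; a minimal sketch, assuming the default host/port and names configured above:

```python
import pymongo

client = pymongo.MongoClient("localhost", 27017)
collection = client["qidian"]["hot"]
print(collection.count_documents({}))  # number of novels stored
print(collection.find_one())           # peek at one document
client.close()
```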
The book 《从零开始学Scrapy网络爬虫》 (Learn Scrapy Web Crawling from Scratch) is quite good: it covers the underlying principles and comes with source code, videos, and PPT slides.
Summary

This post walked through building a Scrapy project for Qidian's hot-sales ranking: creating the project, writing the spider, the item class, and the pipelines, and saving the scraped data to a CSV file, a MySQL database, and a MongoDB database.