Python Crawler Tutorial for Beginners 33-100: Scraping "Aquaman" Comment Data with scrapy
1. Analysis before scraping the Aquaman comment data
Aquaman has hit theaters and the word of mouth has exploded, which means one more movie for us to scrape and analyze. Lovely~
Here is one comment, for flavor:

Just got out of the midnight screening. Director James Wan's films are consistently good, whether Furious 7, Saw, or The Conjuring. The fights and the sound design are beyond criticism, really stunning. All in all, DC claws back a point ( ̄▽ ̄). It beats Justice League by more than a little (my personal feeling). Also, Amber Heard is genuinely beautiful; Wan picks a great cast. Honestly the first time I've seen a movie this awesome; even the scene transitions and effects are off the charts.
2. Scraping the Aquaman data
As usual, the data comes from Maoyan's comment API. For this installment we'll bring out the bigger tool, scrapy, even though in everyday practice plain requests would do the job.
Target URL:

http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=15&startTime=2018-12-11%2009%3A58%3A43

Key parameters:

url: http://m.maoyan.com/mmdb/comments/movie/249342.json
offset: 15
startTime: the start time for the comment page
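Before committing to scrapy, it is worth probing the endpoint once by hand. Below is a minimal sketch with requests, assuming the movie id (249342) and parameters shown above; the header values simply mirror the ones we will configure in setting.py later:

import requests

# Fetch one page of the Maoyan comment API.
url = "http://m.maoyan.com/mmdb/comments/movie/249342.json"
params = {"_v_": "yes", "offset": 0, "startTime": "2018-12-11 09:58:43"}
headers = {
    "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
    "Referer": "http://m.maoyan.com/movie/249342/comments?_v_=yes",
}

resp = requests.get(url, params=params, headers=headers)
data = resp.json()
for cmt in data.get("cmts", []):  # "cmts" holds the visible comments on the page
    print(cmt["nickName"], cmt["content"])

With the endpoint confirmed, the scrapy code for crawling Maoyan is particularly simple; I just split it across a few .py files.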
Haiwang.py
import scrapy
import json
from haiwang.items import HaiwangItem


class HaiwangSpider(scrapy.Spider):
    name = 'Haiwang'
    allowed_domains = ['m.maoyan.com']
    start_urls = ['http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime=0']

    def parse(self, response):
        print(response.url)
        # body_as_unicode() was current when this was written; newer Scrapy
        # versions expose the same content as response.text.
        body_data = response.body_as_unicode()
        js_data = json.loads(body_data)
        item = HaiwangItem()
        for info in js_data["cmts"]:
            item["nickName"] = info["nickName"]
            item["cityName"] = info["cityName"] if "cityName" in info else ""
            item["content"] = info["content"]
            item["score"] = info["score"]
            item["startTime"] = info["startTime"]
            item["approve"] = info["approve"]
            item["reply"] = info["reply"]
            item["avatarurl"] = info["avatarurl"]
            yield item

        # Paginate by feeding the last comment's startTime back into the API.
        yield scrapy.Request(
            "http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime={}".format(item["startTime"]),
            callback=self.parse)
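One detail to watch: the startTime value ("2018-12-11 09:58:43" style) contains spaces and colons, so when building a follow-up URL it is safer to percent-encode the timestamp, which matches the encoded form in the captured URL earlier. A tiny sketch (build_next_url is a hypothetical helper, not part of the spider above):

from urllib.parse import quote

API = "http://m.maoyan.com/mmdb/comments/movie/249342.json?_v_=yes&offset=0&startTime={}"

def build_next_url(start_time):
    # quote() turns "2018-12-11 09:58:43" into "2018-12-11%2009%3A58%3A43"
    return API.format(quote(start_time, safe=""))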
setting.py

In setting.py you need to configure the request headers:
DEFAULT_REQUEST_HEADERS = {
    "Referer": "http://m.maoyan.com/movie/249342/comments?_v_=yes",
    "User-Agent": "Mozilla/5.0 Chrome/63.0.3239.26 Mobile Safari/537.36",
    "X-Requested-With": "superagent"
}
You also need to configure a few crawl settings:

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
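If the flat one-second delay turns out to be too slow or still too aggressive, Scrapy ships an AutoThrottle extension that adapts the delay to observed latency. A sketch of the relevant settings, with illustrative values that are not from the original post:

# AutoThrottle adjusts the delay between requests based on server response times.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1           # initial delay, in seconds
AUTOTHROTTLE_MAX_DELAY = 10            # upper bound when the server is slow
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0  # average parallel requests per remote host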
Enable the pipeline:

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'haiwang.pipelines.HaiwangPipeline': 300,
}

items.py declares the fields you want to capture:
import scrapy


class HaiwangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    nickName = scrapy.Field()
    cityName = scrapy.Field()
    content = scrapy.Field()
    score = scrapy.Field()
    startTime = scrapy.Field()
    approve = scrapy.Field()
    reply = scrapy.Field()
    avatarurl = scrapy.Field()

pipelines.py saves the data, writing it to a CSV file:
import os
import csv


class HaiwangPipeline(object):
    def __init__(self):
        store_file = os.path.dirname(__file__) + '/spiders/haiwang.csv'
        self.file = open(store_file, "a+", newline="", encoding="utf-8")
        self.writer = csv.writer(self.file)

    def process_item(self, item, spider):
        try:
            self.writer.writerow((
                item["nickName"],
                item["cityName"],
                item["content"],
                item["approve"],
                item["reply"],
                item["startTime"],
                item["avatarurl"],
                item["score"],
            ))
        except Exception as e:
            print(e.args)
        return item  # pipelines should hand the item on so later pipelines can see it

    def close_spider(self, spider):
        self.file.close()
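Note that the file is opened in append mode and no header row is written, so the column meaning lives only in the writerow order. If you would rather let Scrapy manage the CSV itself, a hedged alternative is its built-in CsvItemExporter, sketched below as a possible replacement for the hand-rolled writer (the class name HaiwangCsvExportPipeline is made up for illustration):

import os
from scrapy.exporters import CsvItemExporter


class HaiwangCsvExportPipeline(object):
    # Variant pipeline: CsvItemExporter writes the header row for us.
    def open_spider(self, spider):
        store_file = os.path.join(os.path.dirname(__file__), "spiders", "haiwang.csv")
        self.file = open(store_file, "wb")  # exporters expect a binary file object
        self.exporter = CsvItemExporter(self.file, encoding="utf-8")
        self.exporter.start_exporting()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        self.exporter.finish_exporting()
        self.file.close()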
begin.py is a small script that kicks off the crawl:

from scrapy import cmdline

cmdline.execute(("scrapy crawl Haiwang").split())

And that's it. Run it, then just wait for the data to roll in.
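If you prefer not to shell out through cmdline, the same crawl can be started in-process with Scrapy's CrawlerProcess API, sketched below (and as a further shortcut, scrapy crawl Haiwang -o haiwang.csv would write the CSV via feed exports without any custom pipeline):

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Run the spider inside this Python process instead of through the scrapy CLI.
process = CrawlerProcess(get_project_settings())
process.crawl("Haiwang")  # spider name, as declared in Haiwang.py
process.start()           # blocks until the crawl finishes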
Summary
That wraps up scraping the Aquaman comment data with scrapy. Hopefully this walkthrough helps you solve the problems you run into.