Crawler basics: garbled-text (encoding) issues, Jupyter, urllib, requests, lxml, multiprocessing concurrency, session, BeautifulSoup
How to fix garbled text (mojibake)

`requests.get().text` decodes the body using the encoding declared in the response headers; when the result is garbled, fetch the raw bytes via `content` and decode them yourself:

```python
res = res.encode('iso-8859-1').decode('gbk')  # when you don't know the codec, try this; the charset is usually in the HTML <head>

r = requests.get("http://www.baidu.com")
html = r.content            # raw bytes
html = str(html, 'utf-8')   # html_doc = html.decode("utf-8", "ignore")

# or simply override the encoding before reading .text:
r = requests.get("http://www.baidu.com")
r.encoding = 'utf-8'
html = r.text
```

Method two (if the above still doesn't work):

```python
# -*- coding: utf8 -*-
import requests

req = requests.get("http://news.sina.com.cn/")
if req.encoding == 'ISO-8859-1':
    encodings = requests.utils.get_encodings_from_content(req.text)
    if encodings:
        encoding = encodings[0]
    else:
        encoding = req.apparent_encoding
    # encode_content = req.content.decode(encoding, 'replace').encode('utf-8', 'replace')
    encode_content = req.content.decode(encoding, 'replace')  # 'replace' substitutes ? for illegal characters
    print(encode_content)
    with open('test.html', 'w', encoding='utf-8') as f:
        f.write(encode_content)
```

Source: chaowanghn, CSDN, https://blog.csdn.net/chaowanghn/article/details/54889835
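Why the `iso-8859-1` round trip works: when a response carries no charset, requests falls back to ISO-8859-1, which maps every byte to a code point losslessly, so re-encoding `.text` with it recovers the original bytes, which can then be decoded with the real codec. A minimal no-network sketch (the page content here is made up):

```python
raw = '你好'.encode('gbk')            # the bytes a GBK page sends over the wire
garbled = raw.decode('iso-8859-1')   # what .text holds after the ISO-8859-1 fallback
fixed = garbled.encode('iso-8859-1').decode('gbk')  # round-trip back to the real text
print(fixed)  # -> 你好
```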
Jupyter shortcuts
- insert cell: a (above), b (below)
- delete: x
- run: shift+enter
- tab: code completion
- switch cell mode: y (markdown -> code), m (code -> markdown)
- shift+tab: open the help doc
Types of crawlers:
- general-purpose crawlers
- focused crawlers
- incremental crawlers
Saving to a file without `with open`: urllib

```python
import urllib.request

urllib.request.urlretrieve(url, 'a.jpg')
```
requests.get / requests.post

```python
# get
for i in range(5):
    param = {
        'type': 'tv',
        'tag': '熱門',
        'sort': 'recommend',
        'page_limit': 20,
        'page_start': i,
    }
    cont = requests.get(url, params=param).json()
    print(cont)

# post
url = 'https://fanyi.baidu.com/sug'
wd = input('enter a word:')
data = {
    'kw': wd
}
response = requests.post(url=url, data=data)
```
headers dict

```python
headers = {
    'Connection': 'close',  # close the connection as soon as the request succeeds (frees the pool resource promptly)
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36'
}
```
Reading the result of a requests call

```python
content = requests.get(url, params=param)
a = content.text     # str
b = content.content  # bytes, b'...'
c = content.json()   # parsed JSON
```
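The three accessors differ only in how the body bytes get decoded; a no-network sketch (the JSON body here is made up):

```python
import json

body = b'{"count": 2, "subjects": ["tv1", "tv2"]}'  # what .content would hold (bytes)
text = body.decode('utf-8')                          # what .text gives back (a str)
data = json.loads(text)                              # what .json() gives back (parsed objects)
print(data['count'])  # -> 2
```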
etree

Sometimes you can't find where the href lives; search the returned content for `.mp4` (or another file extension), search for the href you saw after clicking the link, and check the inline JS as well.

```python
import base64
import random

import requests
from lxml import etree

text = requests.get(url, headers=headers).text  # fetch the page HTML
tree = etree.HTML(text)                         # instantiate etree
lis = tree.xpath('/html/body/div[5]/div[5]/div[1]/ul/li')  # xpath parsing
for el in lis:
    # on the second parse, prefix the expression with '.', otherwise it resolves
    # from the document root; without text() it returns a list of nodes
    a = el.xpath('./div[2]/h2/a/text()')
    a = el.xpath('./div[3]//text()')
    print(a)
for el in lis:
    a = el.xpath('./a/img/@src')[0]  # xpath returns a list; take the element with [0]
    print(a)
```
```python
res = requests.get(url, headers=headers)
res = res.text
res = res.encode('iso-8859-1').decode('gbk')  # when you don't know the codec, try this; the charset is usually in the HTML <head>
```
```python
# pull the nodes matching several xpath expressions out of one tree
li_list = tree.xpath('//div[@class="bottom"]/ul/li | //div[@class="bottom"]/ul/div[2]/li')
```
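To see the `|` union in action without a live page, here is a sketch on made-up HTML; both expressions feed a single result list in document order:

```python
from lxml import etree

html = """<html><body>
<div class="bottom"><ul><li><a href="/a">Alpha</a></li></ul></div>
<div class="top"><ul><li><a href="/b">Beta</a></li></ul></div>
</body></html>"""

tree = etree.HTML(html)
# '|' unions the node sets of both expressions into one list
lis = tree.xpath('//div[@class="bottom"]/ul/li | //div[@class="top"]/ul/li')
print([li.xpath('./a/text()')[0] for li in lis])  # -> ['Alpha', 'Beta']
```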
```python
# download a file
data = requests.get(url=download_url, headers=headers).content
fileName = name + '.rar'
with open(fileName, 'wb') as fp:
    fp.write(data)
```
Concurrency

```python
from multiprocessing.dummy import Pool  # thread-backed Pool

pool = Pool(5)
pool.map(getvideo, lst)
```
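`multiprocessing.dummy.Pool` is a thread pool with the process-Pool API, which suits I/O-bound work like downloading; a self-contained sketch with a stand-in for `getvideo`:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as multiprocessing.Pool


def fetch(n):
    # stand-in for a download function such as getvideo(url)
    return n * n


pool = Pool(5)  # 5 worker threads
results = pool.map(fetch, [1, 2, 3, 4])  # blocks until every item is processed, preserves order
pool.close()
pool.join()
print(results)  # -> [1, 4, 9, 16]
```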
Captcha recognition

Use Yundama (云打码) to recognize the image:

```python
import http.client, mimetypes, urllib, json, time, requests


class YDMHttp:
    apiurl = 'http://api.yundama.com/api.php'
    username = ''
    password = ''
    appid = ''
    appkey = ''

    def __init__(self, username, password, appid, appkey):
        self.username = username
        self.password = password
        self.appid = str(appid)
        self.appkey = appkey

    def request(self, fields, files=[]):
        response = self.post_url(self.apiurl, fields, files)
        response = json.loads(response)
        return response

    def balance(self):
        data = {'method': 'balance', 'username': self.username, 'password': self.password,
                'appid': self.appid, 'appkey': self.appkey}
        response = self.request(data)
        if response:
            if response['ret'] and response['ret'] < 0:
                return response['ret']
            else:
                return response['balance']
        else:
            return -9001

    def login(self):
        data = {'method': 'login', 'username': self.username, 'password': self.password,
                'appid': self.appid, 'appkey': self.appkey}
        response = self.request(data)
        if response:
            if response['ret'] and response['ret'] < 0:
                return response['ret']
            else:
                return response['uid']
        else:
            return -9001

    def upload(self, filename, codetype, timeout):
        data = {'method': 'upload', 'username': self.username, 'password': self.password,
                'appid': self.appid, 'appkey': self.appkey,
                'codetype': str(codetype), 'timeout': str(timeout)}
        file = {'file': filename}
        response = self.request(data, file)
        if response:
            if response['ret'] and response['ret'] < 0:
                return response['ret']
            else:
                return response['cid']
        else:
            return -9001

    def result(self, cid):
        data = {'method': 'result', 'username': self.username, 'password': self.password,
                'appid': self.appid, 'appkey': self.appkey, 'cid': str(cid)}
        response = self.request(data)
        return response and response['text'] or ''

    def decode(self, filename, codetype, timeout):
        cid = self.upload(filename, codetype, timeout)
        if cid > 0:
            for i in range(0, timeout):
                result = self.result(cid)
                if result != '':
                    return cid, result
                else:
                    time.sleep(1)
            return -3003, ''
        else:
            return cid, ''

    def report(self, cid):
        data = {'method': 'report', 'username': self.username, 'password': self.password,
                'appid': self.appid, 'appkey': self.appkey, 'cid': str(cid), 'flag': '0'}
        response = self.request(data)
        if response:
            return response['ret']
        else:
            return -9001

    def post_url(self, url, fields, files=[]):
        for key in files:
            files[key] = open(files[key], 'rb')
        res = requests.post(url, files=files, data=fields)
        return res.text


username = 'username'    # user name
password = 'password'    # password
appid = 1                # software ID, required parameter; find it under "my software" in the developer console
appkey = '22cc5376925e9387a23cf797cb9ba745'  # software key, required parameter; same place
filename = 'getimage.jpg'  # image file
codetype = 1004          # captcha type, e.g. 1004 = 4 alphanumeric characters; pricing and accuracy differ per type, see http://www.yundama.com/price.html
timeout = 60             # timeout, in seconds

if username == 'username':
    print('set the parameters above before testing')
else:
    yundama = YDMHttp(username, password, appid, appkey)  # init
    uid = yundama.login()  # log in to Yundama
    print('uid: %s' % uid)
    balance = yundama.balance()  # check the balance
    print('balance: %s' % balance)
    # recognize: image path, captcha type ID, timeout (s); returns cid and the result
    cid, result = yundama.decode(filename, codetype, timeout)
    print('cid: %s, result: %s' % (cid, result))
```
session

```python
session = requests.Session()
session.get(url, params=params)
session.post(login_url, data=data, headers=headers)
```

Compared with plain `requests` calls, a session uses more memory, but it stores the cookies it receives.
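What "stores the cookies" means in practice: cookies set by one response ride along automatically on later requests from the same Session. A sketch that seeds the cookie jar by hand instead of hitting a real login URL (the cookie name and value are made up):

```python
import requests

session = requests.Session()
session.cookies.set('token', 'abc123')  # a real login response would populate the jar the same way
print(session.cookies.get('token'))  # -> abc123; subsequent session.get/post calls send this cookie
```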
BeautifulSoup

```python
soup = BeautifulSoup(content, 'lxml')
a_list = soup.select('.book-mulu > ul > li > a')        # select tags; the list holds soup objects
text = soup.find('div', class_='chapter_content').text  # text of the node and all of its descendants
```

Usage:
- import: `from bs4 import BeautifulSoup`
- usage: turn an HTML document into a BeautifulSoup object, then look up nodes through the object's methods and attributes
  1. from a local file: `soup = BeautifulSoup(open('local file'), 'lxml')`
  2. from network content: `soup = BeautifulSoup(str_or_bytes, 'lxml')`
  3. printing the soup object shows the content of the HTML document

Methods:
1. look up by tag name: `soup.a` finds only the first matching tag
2. get attributes: `soup.a.attrs` returns all attributes and values as a dict; `soup.a.attrs['href']` gets href; `soup.a['href']` is the short form
3. get content: `soup.a.string`, `soup.a.text`, `soup.a.get_text()`; note: if the tag contains other tags, `string` returns None, while the other two still return the text
4. find: first matching tag, e.g. `soup.find('a')`, `soup.find('a', title="xxx").text`, `soup.find('a', alt="xxx")`, `soup.find('a', class_="xxx")`, `soup.find('a', id="xxx")`
5. find_all: all matching tags, e.g. `soup.find_all('a')`, `soup.find_all(['a', 'b'])` (all a and b tags), `soup.find_all('a', limit=2)` (first two only)
6. select by CSS selector: `soup.select('#feng')`; common selectors: tag (`a`), class (`.`), id (`#`), hierarchy; `div .dudu #lala .meme .xixi` matches any depth, `div > p > a > .lala` only direct children; note: select always returns a list, so index into it to get the object
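The `string` vs `text`/`get_text()` note above can be checked on a tiny made-up fragment (using the builtin `html.parser` so no lxml is needed):

```python
from bs4 import BeautifulSoup

html = '<div id="feng"><a href="/x"><b>bold</b> text</a></div>'
soup = BeautifulSoup(html, 'html.parser')
a = soup.find('a')
print(a['href'])      # -> /x
print(a.string)       # -> None, because <a> contains another tag
print(a.get_text())   # -> bold text
print(len(soup.select('#feng')))  # -> 1; select always returns a list
```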
Reposted from: https://www.cnblogs.com/NachoLau/p/10440198.html