Python web scraping --- using the requests library
requests is a simple, easy-to-use HTTP library for Python, and is much more concise than urllib.
Because it is a third-party library, it must be installed from the command line before use:
pip install requests
Once installed, try importing it; if the import succeeds, the library is ready to use.
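A quick way to verify the installation is to import the library and print its version (a minimal sanity check, no network needed):

```python
# Minimal post-install sanity check: if this import works, requests is
# installed and ready to use.
import requests

print(requests.__version__)  # e.g. '2.31.0', depending on your install
```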
Basic usage:
requests.get() sends a GET request to the target site and returns a Response object.
import requests

response = requests.get('http://www.baidu.com')
print(response.status_code)  # print the status code
print(response.url)          # print the request URL
print(response.headers)      # print the headers
print(response.cookies)      # print the cookie info
print(response.text)         # print the page source as text
print(response.content)      # print the body as bytes

Output:
Status code: 200
URL: www.baidu.com
headers info
All of the request methods:
import requests

requests.get('http://httpbin.org/get')
requests.post('http://httpbin.org/post')
requests.put('http://httpbin.org/put')
requests.delete('http://httpbin.org/delete')
requests.head('http://httpbin.org/get')
requests.options('http://httpbin.org/get')
Basic GET request:
import requests

response = requests.get('http://httpbin.org/get')
print(response.text)

Output:
GET request with parameters:
Option 1: put the parameters directly in the URL:
import requests

response = requests.get('http://httpbin.org/get?name=gemey&age=22')
print(response.text)

Output:
Option 2: put the parameters in a dict and pass it with the params argument:
import requests

data = {'name': 'tom', 'age': 20}
response = requests.get('http://httpbin.org/get', params=data)
print(response.text)

The output is the same as above.
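As a sketch of what requests does with a params dict, you can build the URL offline via PreparedRequest (part of requests' public models) and inspect the encoded query string without sending anything:

```python
# Sketch: show how requests encodes a params dict into the query string,
# without making any network call, using PreparedRequest.prepare_url.
from requests.models import PreparedRequest

req = PreparedRequest()
req.prepare_url('http://httpbin.org/get', {'name': 'tom', 'age': '20'})
print(req.url)  # http://httpbin.org/get?name=tom&age=20
```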
Parsing JSON:
import requests

response = requests.get('http://httpbin.org/get')
print(response.text)
print(response.json())  # response.json() is equivalent to json.loads(response.text)
print(type(response.json()))

Output:
Saving a binary file:
The binary content is in response.content.
import requests

response = requests.get('http://img.ivsky.com/img/tupian/pre/201708/30/kekeersitao-002.jpg')
b = response.content
with open('F://fengjing.jpg', 'wb') as f:
    f.write(b)
Adding headers to your request:
import requests

headers = {}
headers['User-Agent'] = 'Mozilla/5.0 ' \
                        '(Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 ' \
                        '(KHTML, like Gecko) Version/5.1 Safari/534.50'
response = requests.get('http://www.baidu.com', headers=headers)
Using a proxy:
As with headers, the proxies argument is also a dict.
The example below uses requests to scrape the IPs, ports, and types from a free proxy site.
Because the proxies are free, the addresses expire very quickly.
import requests
import re

def get_html(url):
    proxy = {
        'http': '120.25.253.234:812',
        'https': '163.125.222.244:8123'
    }
    heads = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                           '(KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0'}
    req = requests.get(url, headers=heads, proxies=proxy)
    return req.text

def get_ipport(html):
    iplist = re.findall(r'<td data-title="IP">(.+)</td>', html)
    portlist = re.findall(r'<td data-title="PORT">(.+)</td>', html)
    typelist = re.findall(r'<td data-title="类型">(.+)</td>', html)
    sumray = []
    for ip, port, type_ in zip(iplist, portlist, typelist):
        sumray.append(type_ + ',' + ip + ':' + port)
    print('Anonymous proxies')
    print(sumray)

if __name__ == '__main__':
    url = 'http://www.kuaidaili.com/free/'
    get_ipport(get_html(url))

Output:
Basic POST request:
import requests

data = {'name': 'tom', 'age': '22'}
response = requests.post('http://httpbin.org/post', data=data)
print(response.text)
Getting cookies:
# get cookies
import requests

response = requests.get('http://www.baidu.com')
print(response.cookies)
print(type(response.cookies))
for k, v in response.cookies.items():
    print(k + ':' + v)

Output:
Session persistence:
import requests

session = requests.Session()
session.get('http://httpbin.org/cookies/set/number/12345')
response = session.get('http://httpbin.org/cookies')
print(response.text)

Output:
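The point of a Session is that cookies set by one response are kept in the session's cookie jar and sent with every later request made through it. This can be sketched offline by putting a cookie into the jar directly (no network involved):

```python
# Sketch: a Session keeps cookies in a jar that is reused across requests.
# Here we set a cookie locally (it would normally be set by a server
# response) just to show the jar's contents.
import requests

session = requests.Session()
session.cookies.set('number', '12345')
print(dict(session.cookies))  # {'number': '12345'}
```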
Certificate verification settings:
import requests
from requests.packages import urllib3

urllib3.disable_warnings()  # suppress the warning from urllib3
response = requests.get('https://www.12306.cn', verify=False)  # disable certificate verification
print(response.status_code)

Output: 200
Catching timeout exceptions:
import requests
from requests.exceptions import ReadTimeout

try:
    res = requests.get('http://httpbin.org', timeout=0.1)
    print(res.status_code)
except ReadTimeout:
    print('timeout')
Exception handling:
When you are not sure what errors may occur, use try...except to catch exceptions.
All of the requests exceptions are listed in the library's Exceptions documentation.
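All of them inherit from requests.exceptions.RequestException, which is why a final except RequestException works as a catch-all and why the more specific handlers must come first. A quick offline check:

```python
# Sketch: every exception requests raises derives from RequestException,
# so the most specific handlers must come before the catch-all.
from requests.exceptions import ReadTimeout, HTTPError, RequestException

print(issubclass(ReadTimeout, RequestException))  # True
print(issubclass(HTTPError, RequestException))    # True
```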
import requests
from requests.exceptions import ReadTimeout, HTTPError, RequestException

try:
    response = requests.get('http://www.baidu.com', timeout=0.5)
    print(response.status_code)
except ReadTimeout:
    print('timeout')
except HTTPError:
    print('httperror')
except RequestException:
    print('reqerror')

Source: https://www.cnblogs.com/mzc1997/p/7813801.html