Part2_1 Urllib GET and POST Requests
This post from 生活随笔 walks through sending GET and POST requests with urllib and is shared here as a quick reference.
import urllib.error
import urllib.parse
import urllib.request

# Send a GET request
response = urllib.request.urlopen("http://www.baidu.com")
print(response.read().decode("utf-8"))  # decode the returned page source as UTF-8

# Send a POST request
data = bytes(urllib.parse.urlencode({"hello": "world"}), encoding="utf-8")
response = urllib.request.urlopen("http://httpbin.org/post", data=data)
print(response.read().decode("utf-8"))

# Timeout handling
try:
    response = urllib.request.urlopen("http://httpbin.org/get", timeout=0.01)
    print(response.read().decode("utf-8"))
except urllib.error.URLError as e:
    print("time out")

# Inspect the response status and headers
response = urllib.request.urlopen("http://www.baidu.com")
# print(response.status)
print(response.getheader("Server"))
url = "http://httpbin.org/post"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"}
data = bytes(urllib.parse.urlencode({"name":"jeffchenitm"}),encoding = "utf-8")
req = urllib.request.Request(url = url,data = data,headers= headers,method="POST")
response = urllib.request.urlopen(req)
print(response.read().decode("utf-8"))#真實訪問豆瓣,如果不做更改,將會被識別出來是爬蟲,會報錯418
url = "http://www.douban.com"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36"}req = urllib.request.Request(url = url,headers= headers)
response = urllib.request.urlopen(req)
print(response.read().decode("utf-8"))
Test site: http://httpbin.org
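The examples above only send form data in a POST body; for a GET request, query parameters can be attached by appending the urlencoded string to the URL. A minimal sketch against the same test site (the parameter names here are just placeholders):

import urllib.parse
import urllib.request

# Build a GET URL with query parameters and let httpbin echo them back
params = urllib.parse.urlencode({"hello": "world", "page": 1})
response = urllib.request.urlopen("http://httpbin.org/get?" + params)
print(response.read().decode("utf-8"))  # the "args" field should echo the parameters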