PyTorch Neural Networks: the nn Module
Contents
- 1. The nn module
- 2. torch.optim optimizers
- 3. Custom nn modules
- 4. Weight sharing
Reference: http://pytorch123.com/
1. The nn module
```python
import torch

N, D_in, Hidden_size, D_out = 64, 1000, 100, 10
```
- Build the model with torch.nn.Sequential; the usage is very similar to Keras.
```python
x = torch.randn(N, D_in)   # random input batch
y = torch.randn(N, D_out)  # random targets

# Two-layer fully connected network, declared layer by layer like Keras
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, Hidden_size),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_size, D_out),
)

loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
loss_list = []
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    loss_list.append(loss.item())
    print(t, loss.item())

    model.zero_grad()          # clear accumulated gradients
    loss.backward()            # backpropagate
    with torch.no_grad():      # manual gradient-descent update
        for param in model.parameters():
            param -= learning_rate * param.grad

import pandas as pd

loss_curve = pd.DataFrame(loss_list, columns=['loss'])
loss_curve.plot()
```
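Why the explicit model.zero_grad() in this loop? Autograd accumulates gradients into .grad on every backward() call, so without zeroing, each step would use the sum of all previous gradients. A minimal sketch (not from the original article) illustrating the accumulation:

```python
import torch

# Gradients accumulate across backward() calls unless they are zeroed
w = torch.randn(3, requires_grad=True)
w.sum().backward()
print(w.grad)  # tensor([1., 1., 1.])
w.sum().backward()
print(w.grad)  # tensor([2., 2., 2.]) -- the second pass added on top of the first
```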
2. torch.optim optimizers
- Use a built-in optimizer such as torch.optim.Adam
- optimizer.zero_grad() clears the gradients
- optimizer.step() updates the parameters
```python
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

loss_list = []
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    loss_list.append(loss.item())
    print(t, loss.item())

    optimizer.zero_grad()  # clear gradients
    loss.backward()        # backpropagate
    optimizer.step()       # update parameters
```
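Adam adapts the step size per parameter, but conceptually optimizer.step() replaces the manual update from section 1. For plain torch.optim.SGD with default options, the step is essentially the following (a sketch, not the actual torch.optim source, which also handles momentum, weight decay, and other options):

```python
# Roughly what SGD's step() does with default options
with torch.no_grad():
    for param in model.parameters():
        param -= learning_rate * param.grad
```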
3. Custom nn modules
- Subclass nn.Module and define the forward() method for the forward pass.
```python
import torch

class myModel(torch.nn.Module):
    def __init__(self, D_in, Hidden_size, D_out):
        super(myModel, self).__init__()
        self.fc1 = torch.nn.Linear(D_in, Hidden_size)
        self.fc2 = torch.nn.Linear(Hidden_size, D_out)

    def forward(self, x):
        x = self.fc1(x).clamp(min=0)  # clamp(min=0) acts as ReLU
        x = self.fc2(x)
        return x

N, D_in, Hidden_size, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = myModel(D_in, Hidden_size, D_out)
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

loss_val = []
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    loss_val.append(loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

import pandas as pd

loss_val = pd.DataFrame(loss_val, columns=['loss'])
loss_val.plot()
```
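The clamp(min=0) used in forward() is exactly a ReLU; a quick equivalence check (an illustrative snippet, not from the original article):

```python
import torch
import torch.nn.functional as F

t = torch.randn(5)
print(torch.equal(t.clamp(min=0), F.relu(t)))  # True: clamp(min=0) is ReLU
```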
4. Weight sharing
- Build a toy model with three kinds of fully connected layers. In forward(), the middle shareFC layer is applied a random number of times (0-3) in a for loop, and all of these applications share the same parameters.
```python
import random
import torch

class shareParamsModel(torch.nn.Module):
    def __init__(self, D_in, Hidden_size, D_out):
        super(shareParamsModel, self).__init__()
        self.inputFC = torch.nn.Linear(D_in, Hidden_size)
        self.shareFC = torch.nn.Linear(Hidden_size, Hidden_size)
        self.outputFC = torch.nn.Linear(Hidden_size, D_out)
        self.sharelayers = 0

    def forward(self, x):
        x = self.inputFC(x).clamp(min=0)
        self.sharelayers = 0
        # Apply the same shareFC layer 0-3 times; every pass reuses its weights
        for _ in range(random.randint(0, 3)):
            x = self.shareFC(x).clamp(min=0)
            self.sharelayers += 1
        x = self.outputFC(x)
        return x

N, D_in, Hidden_size, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = shareParamsModel(D_in, Hidden_size, D_out)
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

loss_val = []
for t in range(500):
    y_pred = model(x)
    print('share layers: ', model.sharelayers)
    loss = loss_fn(y_pred, y)
    loss_val.append(loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for p in model.parameters():
    print(p.size())

import pandas as pd

loss_val = pd.DataFrame(loss_val, columns=['loss'])
loss_val.plot()
```
Output:
```
share layers:  1
share layers:  0
share layers:  2
share layers:  1
share layers:  2
share layers:  1
share layers:  0
share layers:  1
share layers:  0
share layers:  0
share layers:  3
share layers:  3
... (omitted)
```
The parameter shapes are the same across multiple runs:
```
torch.Size([100, 1000])
torch.Size([100])
torch.Size([100, 100])
torch.Size([100])
torch.Size([10, 100])
torch.Size([10])
```
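These six tensors are the weight and bias of inputFC, shareFC, and outputFC. The shared layer appears only once no matter how many times forward() applies it, because it is a single nn.Linear instance registered once on the module. A small verification sketch (assumed, not part of the original article):

```python
# Each registered submodule contributes its parameters exactly once
for name, p in model.named_parameters():
    print(name, tuple(p.size()))

total = sum(p.numel() for p in model.parameters())
print('total parameters:', total)  # 100*1000 + 100 + 100*100 + 100 + 10*100 + 10 = 111210
```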