PyTorch and TensorFlow 2.0 method equivalents
This article collects side-by-side examples of common PyTorch operations and their TensorFlow 2.0 equivalents, for reference.
Embedding initialization
pytorch: Embedding()
tf2.0: random.normal()
import numpy as np
import torch
import torch.nn as nn
import tensorflow as tf

def confirm(weight):
    # Empirically check the mean and variance of the sampled weights
    mean = np.sum(weight) / dim
    print("mean: {}".format(mean))
    square_sum = np.sum((mean - weight) ** 2)
    print("variance: {}".format(square_sum / dim))

dim = 1000000

# PyTorch: nn.Embedding initializes its weights from N(0, 1)
embd = nn.Embedding(5, dim)
weight = embd.weight.data[0].numpy()
confirm(weight)

# TensorFlow 2.0: tf.random.normal samples from N(0, 1) by default
embd2 = tf.Variable(tf.random.normal([5, dim]))
weight2 = embd2.numpy()[0]
confirm(weight2)
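If you want an actual embedding layer on the TensorFlow side rather than a bare tf.Variable, a tf.keras Embedding layer can be given an explicit normal initializer so its weights match nn.Embedding's default N(0, 1) draw. A minimal sketch, reusing dim and confirm() from above (the variable name embd3 is just illustrative):

# Sketch: Keras Embedding layer initialized from N(0, 1), like nn.Embedding
embd3 = tf.keras.layers.Embedding(
    input_dim=5,
    output_dim=dim,
    embeddings_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=1.0),
)
_ = embd3(tf.constant([0, 1]))        # calling the layer once creates its weights
confirm(embd3.get_weights()[0][0])    # first row of the embedding matrix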
Tensor initialization
pytorch: xavier_uniform_()
tf2.0: GlorotUniform()
def confirm(weight):
    # Empirically check the mean and variance of the sampled weights
    mean = np.sum(weight) / dim
    print("mean: {}".format(mean))
    square_sum = np.sum((mean - weight) ** 2)
    print("variance: {}".format(square_sum / dim))

dim = 1000000

# PyTorch: fill an existing tensor in place with Xavier/Glorot uniform values
w = nn.Parameter(torch.zeros(size=(3, dim)))
nn.init.xavier_uniform_(w.data)
weight = w.data[0].numpy()
confirm(weight)

# TensorFlow 2.0: GlorotUniform is the equivalent initializer
initializer = tf.initializers.GlorotUniform()
w2 = tf.Variable(initializer(shape=[3, dim]))
weight2 = w2[0].numpy()
confirm(weight2)
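Both xavier_uniform_ and GlorotUniform draw from the same distribution, U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)), so the two variance printouts above should roughly agree. A quick sketch of the expected value for the (3, dim) weight used here:

# Sketch: expected variance of the Glorot/Xavier uniform draw for a (3, dim) weight
fan_sum = 3 + dim                           # fan_in + fan_out
limit = np.sqrt(6.0 / fan_sum)
print("limit: {}".format(limit))
print("expected variance: {}".format(limit ** 2 / 3))   # Var of U(-limit, limit) is limit^2 / 3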
Multi-class cross-entropy loss
pytorch: CrossEntropyLoss()
tf2.0: categorical_crossentropy()
input = np.random.random((3, 3))
input_p = torch.tensor(input)
input_t = tf.convert_to_tensor(input)

# Class-index targets for PyTorch; index and one-hot variants for TensorFlow
target_p = torch.tensor([1, 2, 2])
target_t1 = tf.keras.utils.to_categorical([1, 2, 2])
target_t2 = tf.constant([1, 2, 2])
target_t3 = tf.one_hot([1, 2, 2], depth=3)

# PyTorch: CrossEntropyLoss applies log-softmax internally, so raw logits go in
p_f = torch.nn.CrossEntropyLoss()
loss1 = p_f(input_p, target_p)
print(loss1)

# TensorFlow 2.0: apply softmax first, then categorical_crossentropy on one-hot targets
loss2 = tf.losses.categorical_crossentropy(y_true=target_t1, y_pred=tf.nn.softmax(input_t, axis=1))
print(tf.reduce_mean(loss2))

# sparse_categorical_crossentropy takes class indices directly
loss3 = tf.keras.losses.sparse_categorical_crossentropy(y_true=target_t2, y_pred=tf.nn.softmax(input_t, axis=1))
print(tf.reduce_mean(loss3))

# one-hot targets built with tf.one_hot work the same way
loss4 = tf.keras.losses.categorical_crossentropy(y_true=target_t3, y_pred=tf.nn.softmax(input_t, axis=1))
print(tf.reduce_mean(loss4))
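A closer match to how CrossEntropyLoss behaves (raw logits in, softmax handled internally) is to skip the explicit tf.nn.softmax and pass from_logits=True to the Keras loss. A minimal sketch, reusing input_t and target_t2 from above:

# Sketch: let the loss apply the softmax itself, as CrossEntropyLoss does in PyTorch
loss5 = tf.keras.losses.sparse_categorical_crossentropy(
    y_true=target_t2, y_pred=input_t, from_logits=True)
print(tf.reduce_mean(loss5))                # should agree with loss1 and loss3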
Binary cross-entropy loss
pytorch: BCEWithLogitsLoss()
tf2.0: sigmoid_cross_entropy_with_logits()
input = np.random.random((3, 3))
input_p = torch.tensor(input)
input_t = tf.convert_to_tensor(input)

target = np.array([[0., 1., 1.], [0., 0., 1.], [1., 0., 1.]])
target_p = torch.tensor(target)
target_t = tf.convert_to_tensor(target)

# PyTorch: BCEWithLogitsLoss applies the sigmoid internally, so raw logits go in
p_f = torch.nn.BCEWithLogitsLoss()
loss1 = p_f(input_p, target_p)
print(loss1)

# TensorFlow 2.0: sigmoid_cross_entropy_with_logits also takes raw logits
loss2 = tf.nn.sigmoid_cross_entropy_with_logits(logits=input_t, labels=target_t)
print(tf.reduce_mean(loss2))

# The Keras equivalent, with from_logits=True
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
loss3 = loss_fn(y_true=target_t, y_pred=input_t)
print(tf.reduce_mean(loss3))
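For reference, BCEWithLogitsLoss is the fused, numerically more stable form of applying a sigmoid followed by BCELoss; the unfused version should give the same value here. A quick sketch, reusing input_p and target_p from above:

# Sketch: sigmoid + BCELoss reproduces BCEWithLogitsLoss (up to floating-point error)
bce = torch.nn.BCELoss()
loss_check = bce(torch.sigmoid(input_p), target_p)
print(loss_check)                           # should match loss1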