Linear regression predicts a continuous value; logistic regression answers a "yes"/"no" question, i.e. a binary classification problem.
The sigmoid function squashes any real-valued input into the interval (0, 1), so its output can be interpreted as a probability.
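A minimal sketch of the sigmoid function described above (plain NumPy, independent of the Keras model below):

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # 0.5 — the midpoint
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```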
The loss function for logistic regression is the cross-entropy loss, which heavily penalizes confident but wrong predictions.
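For a binary label y and predicted probability p, the cross-entropy loss is -[y·log(p) + (1-y)·log(1-p)], averaged over the samples. A hand-rolled sketch of this formula (the `binary_cross_entropy` helper and the sample values are illustrative, not from the original post):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    # Clip predictions away from 0 and 1 to avoid log(0).
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 0])
y_pred = np.array([0.9, 0.1, 0.8, 0.2])  # mostly-confident, mostly-correct
print(binary_cross_entropy(y_true, y_pred))  # ≈ 0.164
```

This is the same quantity Keras computes when `loss="binary_crossentropy"` is set in `compile` below.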
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # silence TensorFlow's info logs
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Load the credit dataset (no header row) and inspect it.
data = pd.read_csv("credit-a.csv", header=None)
print(data.head())
print(data.iloc[:, -1].value_counts())  # class distribution of the label column
# Continuing from the block above: split features from the label.
x = data.iloc[:, :-1]                # all columns except the last
y = data.iloc[:, -1].replace(-1, 0)  # map the -1/1 labels to 0/1
print(x, y)
# Build the model: 15 input features -> two hidden ReLU layers -> one sigmoid output.
module = tf.keras.Sequential()
module.add(tf.keras.layers.Dense(4, input_shape=(15,), activation="relu"))
module.add(tf.keras.layers.Dense(4, activation="relu"))
module.add(tf.keras.layers.Dense(1, activation="sigmoid"))
print(module.summary())
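The summary printed above reports each layer's parameter count. For a `Dense` layer this is inputs × units weights plus units biases; a quick check of the arithmetic for the three layers above:

```python
# (inputs, units) for each Dense layer in the model above
layers = [(15, 4), (4, 4), (4, 1)]
params = [i * u + u for i, u in layers]  # weights + biases per layer
print(params, sum(params))  # [64, 20, 5] 89
```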
# Compile and train: binary cross-entropy pairs with the sigmoid output.
module.compile(optimizer="adam", loss="binary_crossentropy", metrics=["acc"])
history = module.fit(x, y, epochs=100)
print(history.history.keys())

# Plot the training loss and accuracy curves over the epochs.
plt.plot(history.epoch, history.history.get("loss"))
plt.show()
plt.plot(history.epoch, history.history.get("acc"))
plt.show()
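After training, `module.predict(x)` returns the sigmoid probabilities, which can be thresholded at 0.5 to get class labels. A minimal sketch of that thresholding step (the probability values here are illustrative stand-ins for real `predict` output):

```python
import numpy as np

probs = np.array([0.92, 0.13, 0.55, 0.49])  # illustrative sigmoid outputs
labels = (probs > 0.5).astype(int)          # threshold at 0.5
print(labels)  # [1 0 1 0]
```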
Summary: this section covered logistic regression, the sigmoid activation, and the cross-entropy loss, implemented with tf.keras on the credit-a dataset.
The above is the full content of "深度学习-Tensorflow2.2-深度学习基础和tf.keras{1}-逻辑回归与交叉熵概述-05", collected and organized by 生活随笔; hopefully it helps you solve the problems you ran into.