Multilayer Perceptron
A neuron computes a weighted sum of its input features and then applies an activation (or transfer) function to that sum to produce its output.
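To make "weighted sum, then activation" concrete, here is a minimal NumPy sketch of one neuron; the inputs, weights, and bias below are made-up illustration values, not anything from the original post.

import numpy as np

def single_neuron(x, w, b):
    z = np.dot(w, x) + b          # weighted sum of the inputs plus a bias
    return 1 / (1 + np.exp(-z))   # sigmoid activation turns z into the output

x = np.array([0.5, -1.0, 2.0])    # hypothetical input features
w = np.array([0.8, 0.1, -0.4])    # hypothetical weights
b = 0.2                           # hypothetical bias
print(single_neuron(x, w, b))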
A single neuron
Multiple neurons
Limitations of a single layer of neurons: a single layer can only draw linear decision boundaries, so it cannot fit linearly inseparable problems such as XOR.
Multilayer perceptron: stacking one or more hidden layers between the inputs and the output removes this limitation.
Activation functions
ReLU: for an input x, anything below 0 is zeroed out and anything above 0 is passed through unchanged, i.e. relu(x) = max(0, x).
Sigmoid: the input x is squashed into the range (0, 1) by sigmoid(x) = 1 / (1 + e^(-x)).
Tanh: squashes the input into the range -1 to 1.
Leaky ReLU: like ReLU, but negative inputs are multiplied by a small slope instead of being set to 0 (a code sketch of all four follows below).
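As a quick reference, here is a small NumPy sketch of the four activations described above (just an illustration; the original post showed their curves as figures):

import numpy as np

def relu(x):
    return np.maximum(0, x)              # 0 for negative inputs, identity otherwise

def sigmoid(x):
    return 1 / (1 + np.exp(-x))          # squashes into (0, 1)

def tanh(x):
    return np.tanh(x)                    # squashes into (-1, 1)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x) # small slope instead of 0 for negatives

x = np.linspace(-5, 5, 11)
for f in (relu, sigmoid, tanh, leaky_relu):
    print(f.__name__, f(x))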
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # silence TensorFlow's info/warning logs
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

data = pd.read_csv("Advertising.csv")      # advertising spend and sales per market
print(data.head())
plt.scatter(data.TV, data.sales)           # TV advertising spend vs. sales
plt.show()
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # silence TensorFlow's info/warning logs
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

data = pd.read_csv("Advertising.csv")
print(data.head())
plt.scatter(data.radio, data.sales)        # radio advertising spend vs. sales
plt.show()
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # silence TensorFlow's info/warning logs
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

data = pd.read_csv("Advertising.csv")
print(data.head())
x = data.iloc[:, 1:-1]                     # features: TV, radio, newspaper
y = data.iloc[:, -1]                       # target: sales
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(3,), activation="relu"),   # hidden layer, 10 units
    tf.keras.layers.Dense(1)                                          # linear output layer
])
print(model.summary())
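For reference, model.summary() should report 40 parameters for the hidden layer (3 inputs × 10 units + 10 biases) and 11 for the output layer (10 weights + 1 bias), i.e. 51 trainable parameters in total.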
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # silence TensorFlow's info/warning logs
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

data = pd.read_csv("Advertising.csv")
x = data.iloc[:, 1:-1]                     # features: TV, radio, newspaper
y = data.iloc[:, -1]                       # target: sales
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(3,), activation="relu"),   # hidden layer, 10 units
    tf.keras.layers.Dense(1)                                          # linear output layer
])
print(model.summary())
model.compile(optimizer="adam", loss="mse")   # mean squared error loss, Adam optimizer
model.fit(x, y, epochs=100)                   # train for 100 epochs
test = data.iloc[:10, 1:-1]                   # first 10 rows of features
print(model.predict(test))                    # predicted sales
test = data.iloc[:10, -1]                     # the corresponding actual sales
print(test)
Gradient descent
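The Adam optimizer used above is a refinement of plain gradient descent: each step nudges the weights in the direction that lowers the loss, w ← w − learning_rate · ∂loss/∂w. A minimal hand-rolled sketch for a one-feature linear model with an MSE loss (toy data for illustration, not the author's code):

import numpy as np

# toy data that roughly follows y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b, lr = 0.0, 0.0, 0.05
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)   # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)         # d(MSE)/db
    w -= lr * grad_w                       # step against the gradient
    b -= lr * grad_b
print(w, b)   # should end up close to 2 and 1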
Predicted values (the output of model.predict on the first 10 rows)
Actual values (the sales column for the same rows)
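To see the two printouts side by side, a small follow-up sketch (it assumes the model and data defined above):

pred = model.predict(data.iloc[:10, 1:-1]).flatten()   # predicted sales for the first 10 rows
actual = data.iloc[:10, -1].values                     # the recorded sales for the same rows
for p, a in zip(pred, actual):
    print(f"predicted: {p:.2f}   actual: {a:.2f}")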
Summary