Keras: Sequential and Functional Models
1、應(yīng)用序貫?zāi)P偷幕静襟E
2. Creating the model
(1) A model can be constructed by passing a list of layers to the Sequential constructor:
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])

(2) Layers can also be added one at a time with the .add() method:
model = Sequential()
model.add(Dense(32, input_shape=(784,)))
model.add(Activation('relu'))
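Whichever way the model is built, model.summary() can be used as a quick sanity check (an addition, not from the original post); it prints each layer with its output shape and parameter count:

model.summary()   # lists every layer, its output shape, and its number of parameters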
3. Specifying the input shape

The model needs to know the shape of its input data, so the first layer of a Sequential model must receive information about the input shape. The following layers can infer the shapes of the intermediate data automatically, so this argument is not needed for them. There are several ways to specify the input shape of the first layer:
(1) Pass an input_shape keyword argument to the first layer; input_shape is a tuple:
model = Sequential()
model.add(Dense(64, input_shape=(20,), activation='relu'))

(2) Some 2D layers, such as Dense, can specify the input shape implicitly through the argument input_dim, an int. Some 3D temporal layers support the arguments input_dim and input_length instead (see the sketch after the next snippet).
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
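For the temporal case, a hedged illustration (an addition, not from the original article): an LSTM layer that receives sequences of 10 timesteps with 64 features per step can declare this as input_shape=(10, 64), which corresponds to input_length=10 and input_dim=64.

from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
# sequences of 10 timesteps, 64 features each (i.e. input_length=10, input_dim=64)
model.add(LSTM(32, input_shape=(10, 64)))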
4. Compilation

Before training a model, the learning process has to be configured with compile. compile takes three arguments: an optimizer, a loss function loss, and a list of metrics.
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
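Instead of a string name, an optimizer instance can be passed to control hyperparameters such as the learning rate. A minimal sketch, added here and assuming the classic Keras 2 argument name lr:

from keras.optimizers import SGD

# configure SGD explicitly instead of using the 'rmsprop' string shortcut
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.9),
              metrics=['accuracy'])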
5. Training

A model is normally trained with the fit function:
model.fit(x_train, y_train, epochs=20, batch_size=128)
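fit can also report metrics on a held-out set after every epoch. A hedged sketch (an addition; the names x_val and y_val follow the example later in this post):

# evaluate on (x_val, y_val) at the end of each epoch while training
model.fit(x_train, y_train,
          epochs=20,
          batch_size=128,
          validation_data=(x_val, y_val))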
6. Evaluation

Evaluate how well the model performs on a validation set:
score = model.evaluate(x_val, y_val, batch_size=128)
print('val score:', score[0])
print('val accuracy:', score[1])
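The order of the values in score follows model.metrics_names: the loss comes first, then the metrics passed to compile. A quick check, added as a hint:

print(model.metrics_names)   # e.g. ['loss', 'acc'] for the compile call above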
7. Prediction

Given a trained model, feeding in feature values x yields predicted labels y:
import numpy as np

x = np.random.random((1, 20))   # one sample with the same feature dimension as the training data
y = model.predict(x, verbose=0)
print(y)
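Since the output layer in the examples here uses a sigmoid, y is a probability. A small added sketch for turning it into a hard 0/1 label:

# threshold the predicted probability at 0.5 to get a class label
y_class = (y > 0.5).astype('int32')
print(y_class)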
8. Example

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout

# prepare the training and validation sets
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_val = np.random.random((100, 20))
y_val = np.random.randint(2, size=(100, 1))

model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))   # or: model.add(Dense(64, input_shape=(20,), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=128)

score = model.evaluate(x_val, y_val, batch_size=128)
print('val score:', score[0])
print('val accuracy:', score[1])

x = np.random.random((1, 20))   # a single sample for prediction
y = model.predict(x, verbose=0)
print(y)

Functional models
Functional models are more complex than Sequential models: inputs can be fed in together or at different stages, and the model can produce outputs at different stages as well.
1. Basic steps for using the functional model

The workflow mirrors the Sequential case: define input tensors, call layers on them, wrap the input and output tensors into a Model, then compile, train, evaluate, and predict.
2. Creating the model
model = Model(inputs=..., outputs=...)
3. Specifying the input shape

inputs = Input(shape=(20,))
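Putting the two pieces together, a minimal sketch (a stripped-down version of Example 1 below) of how an input tensor flows through layers and is wrapped into a Model:

from keras.models import Model
from keras.layers import Input, Dense

inputs = Input(shape=(20,))                         # symbolic input tensor
outputs = Dense(1, activation='sigmoid')(inputs)    # layers are called on tensors
model = Model(inputs=inputs, outputs=outputs)       # wrap the input and output tensors into a Model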
4. Compilation, training, evaluation, and prediction work the same way as for the Sequential model, so they are not repeated here.

5. Example 1
This adapts the Sequential-model example above to the functional API.
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, Dropout

# prepare the training and validation sets
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_val = np.random.random((100, 20))
y_val = np.random.randint(2, size=(100, 1))

inputs = Input(shape=(20,))
x = Dense(64, activation='relu')(inputs)
x = Dropout(0.5)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(1, activation='sigmoid')(x)

model = Model(inputs=inputs, outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=128)

score = model.evaluate(x_val, y_val, batch_size=128)
print('val score:', score[0])
print('val accuracy:', score[1])

x = np.random.random((1, 20))   # a single sample for prediction
y = model.predict(x, verbose=0)
print(y)

6. Example 2
A model with multiple inputs and multiple outputs:
import keras
from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# main input: a sequence of 100 word indices
main_input = Input(shape=(100,), dtype='int32', name='main_input')
x = Embedding(output_dim=512, input_dim=10000, input_length=100)(main_input)
lstm_out = LSTM(32)(x)

# auxiliary output branch directly on top of the LSTM
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)

# auxiliary input, concatenated with the LSTM output
auxiliary_input = Input(shape=(5,), name='aux_input')
x = keras.layers.concatenate([lstm_out, auxiliary_input])

# we stack a deep densely-connected network on top
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)

# and finally we add the main logistic regression layer
main_output = Dense(1, activation='sigmoid', name='main_output')(x)

model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])
model.compile(optimizer='rmsprop',
              loss={'main_output': 'binary_crossentropy', 'aux_output': 'binary_crossentropy'},
              loss_weights={'main_output': 1., 'aux_output': 0.2})

# and train it (headline_data, additional_data and labels are assumed to be prepared beforehand):
model.fit({'main_input': headline_data, 'aux_input': additional_data},
          {'main_output': labels, 'aux_output': labels},
          epochs=50, batch_size=32)
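headline_data, additional_data, and labels are left undefined above; they are placeholders carried over from the Keras documentation. A hedged sketch of dummy arrays that match the declared input shapes, just to make the example runnable:

import numpy as np

headline_data = np.random.randint(1, 10000, size=(1000, 100))   # 1000 sequences of 100 word indices
additional_data = np.random.random((1000, 5))                    # 1000 auxiliary feature vectors of length 5
labels = np.random.randint(2, size=(1000, 1))                    # binary targets shared by both outputs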