tf: From RNN to BERT
Data initialization
```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import *

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, -1)           # flatten each 28x28 image to a sequence of 784 pixels
y_train = keras.utils.to_categorical(y_train)  # integer labels -> one-hot vectors
```
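After this reshape, each image is treated as a sequence of 784 integer pixel values in [0, 255], which is why the models below use `Embedding(input_dim=256)`. A small NumPy sketch of the same preprocessing on synthetic data (`np.eye` stands in for `keras.utils.to_categorical`; the shapes are the point):

```python
import numpy as np

# Synthetic stand-in for MNIST: 100 grayscale 28x28 images with labels 0-9.
x = np.random.randint(0, 256, size=(100, 28, 28), dtype=np.uint8)
y = np.random.randint(0, 10, size=(100,))

x_seq = x.reshape(100, -1)  # each image becomes a sequence of 784 pixel "tokens"
y_onehot = np.eye(10)[y]    # one-hot rows, like keras.utils.to_categorical(y, 10)

print(x_seq.shape)     # (100, 784)
print(y_onehot.shape)  # (100, 10)
```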
SimpleRNN

```python
model1 = keras.Sequential()
model1.add(Embedding(input_dim=256, output_dim=5))
model1.add(SimpleRNN(units=2))
model1.add(Dense(10, activation="softmax"))  # softmax so categorical_crossentropy gets probabilities

model1.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model1.fit(x_train, y_train, batch_size=10)
```
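Under the hood, SimpleRNN applies one tanh update per timestep: h_t = tanh(x_t·W + h_{t-1}·U + b). A minimal NumPy sketch of that recurrence, with dimensions matching units=2 and output_dim=5 above (weights are random, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, units = 7, 5, 2             # sequence length, input dim, hidden units

x = rng.normal(size=(T, d_in))       # one embedded sequence
W = rng.normal(size=(d_in, units))   # input -> hidden weights
U = rng.normal(size=(units, units))  # hidden -> hidden (recurrent) weights
b = np.zeros(units)

h = np.zeros(units)                  # initial hidden state
for t in range(T):
    # the whole SimpleRNN cell: one affine map squashed by tanh
    h = np.tanh(x[t] @ W + h @ U + b)

print(h.shape)  # (2,) -- the final state Keras returns when return_sequences=False
```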
GRU

```python
model2 = keras.Sequential()
model2.add(Embedding(input_dim=256, output_dim=5))
model2.add(GRU(units=2))
model2.add(Dense(10, activation="softmax"))

model2.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model2.fit(x_train, y_train, batch_size=10)
```
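GRU adds two gates to the plain recurrence: an update gate z that decides how much of the old state to keep, and a reset gate r that masks the old state inside the candidate. A NumPy sketch of one GRU step (same small dimensions; random weights, for illustration only):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
d_in, units = 5, 2
x_t = rng.normal(size=d_in)
h_prev = np.zeros(units)

# three weight sets: update gate z, reset gate r, candidate state
Wz, Wr, Wh = (rng.normal(size=(d_in, units)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(size=(units, units)) for _ in range(3))

z = sigmoid(x_t @ Wz + h_prev @ Uz)             # update gate: keep old state vs take new
r = sigmoid(x_t @ Wr + h_prev @ Ur)             # reset gate: how much old state feeds the candidate
h_cand = np.tanh(x_t @ Wh + (r * h_prev) @ Uh)  # candidate state
h_t = z * h_prev + (1.0 - z) * h_cand           # convex blend of old state and candidate
```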
LSTM

```python
model3 = keras.Sequential()
model3.add(Embedding(input_dim=256, output_dim=5))
model3.add(LSTM(units=2))
model3.add(Dense(10, activation="softmax"))

model3.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model3.fit(x_train, y_train, batch_size=10)
```
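LSTM goes one step further than GRU: it keeps a separate cell state c alongside the hidden state h, updated additively through forget, input, and output gates. A NumPy sketch of one LSTM step (random weights, for illustration only):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
d_in, units = 5, 2
x_t = rng.normal(size=d_in)
h_prev = np.zeros(units)  # hidden state
c_prev = np.zeros(units)  # cell state (the LSTM's extra memory track)

# four weight sets: forget gate f, input gate i, candidate g, output gate o
Wf, Wi, Wg, Wo = (rng.normal(size=(d_in, units)) for _ in range(4))
Uf, Ui, Ug, Uo = (rng.normal(size=(units, units)) for _ in range(4))

f = sigmoid(x_t @ Wf + h_prev @ Uf)  # how much old cell memory to keep
i = sigmoid(x_t @ Wi + h_prev @ Ui)  # how much new candidate to write
g = np.tanh(x_t @ Wg + h_prev @ Ug)  # candidate values
o = sigmoid(x_t @ Wo + h_prev @ Uo)  # how much of the cell to expose

c_t = f * c_prev + i * g             # additive memory update
h_t = o * np.tanh(c_t)               # hidden state read out from the cell
```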
encoder-decoder

```python
model3 = keras.Sequential()
model3.add(Embedding(input_dim=256, output_dim=5))
model3.add(LSTM(units=2))
model3.add(Dense(10))

model3.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model3.fit(x_train, y_train, batch_size=10)
```
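The block above simply repeats the LSTM classifier. A true encoder-decoder wires two recurrent nets together: the encoder compresses the source sequence into its final states, which seed the decoder. A hedged sketch with the Keras functional API (the vocabulary sizes, embedding width, and unit count are illustrative, not from the original):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense

src_vocab, tgt_vocab, units = 256, 256, 32  # illustrative sizes

# Encoder: consume the source sequence, keep only the final (h, c) states.
enc_in = Input(shape=(None,))
enc_emb = Embedding(src_vocab, 16)(enc_in)
_, state_h, state_c = LSTM(units, return_state=True)(enc_emb)

# Decoder: start from the encoder states, predict one target token per step.
dec_in = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, 16)(dec_in)
dec_out, _, _ = LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
probs = Dense(tgt_vocab, activation="softmax")(dec_out)

model = keras.Model([enc_in, dec_in], probs)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```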
Summary

That is the full content of tf: From RNN to BERT. I hope this article helps you solve the problems you ran into.