FGM Revisited! Implementing FGM Adversarial Training for NLP with Custom Model Training in TensorFlow 2.0 (with Code)
For TF version 2.2 and above:
```python
import tensorflow as tf
from tensorflow.python.keras.engine import data_adapter

def creat_FGM(epsilon=0.2):
    # Note: train_step relies on `model`, `loss_func`, `optimizer` and
    # `train_loss` being defined in the enclosing/module scope.
    @tf.function
    def train_step(self, data):
        '''
        Compute the gradient on the embedding;
        compute the perturbation and add it to the embedding;
        recompute the loss and gradients;
        remove the perturbation from the embedding, then update the parameters.
        '''
        data = data_adapter.expand_1d(data)
        x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)
        with tf.GradientTape() as tape:
            y_pred = model(x, training=True)
            loss = loss_func(y, y_pred)
        embedding = model.trainable_variables[0]
        embedding_gradients = tape.gradient(loss, [model.trainable_variables[0]])[0]
        embedding_gradients = tf.zeros_like(embedding) + embedding_gradients
        # Compute the perturbation (scaled to L2 norm epsilon)
        delta = epsilon * embedding_gradients / (
            tf.math.sqrt(tf.reduce_sum(embedding_gradients**2)) + 1e-8)
        model.trainable_variables[0].assign_add(delta)
        with tf.GradientTape() as tape2:
            y_pred = model(x, training=True)
            new_loss = loss_func(y, y_pred)
        gradients = tape2.gradient(new_loss, model.trainable_variables)
        model.trainable_variables[0].assign_sub(delta)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))
        train_loss.update_state(loss)
        return {m.name: m.result() for m in self.metrics}
    return train_step
```

Usage
With TF 2.2 and above, the approach is fairly simple:
```python
import functools

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(0.001),
    metrics=['acc'],
)
# Replace the model.train_step method and clear the cached train_function
train_step = creat_FGM()
model.train_step = functools.partial(train_step, model)
model.train_function = None

history = model.fit(X_train, y_train, epochs=5,
                    validation_data=(X_test, y_test),
                    verbose=1, batch_size=32)
```

For TF versions below 2.2 (works with the TF 2.0 GPU build):
```python
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
loss_func = tf.losses.SparseCategoricalCrossentropy()
train_loss = tf.metrics.Mean(name='train_loss')

ds_train = tf.data.Dataset.from_tensor_slices((X_train, y_train)) \
    .shuffle(buffer_size=1000).batch(32) \
    .prefetch(tf.data.experimental.AUTOTUNE).cache()

@tf.function
def train_step(model, x, y, loss_func, optimizer, train_loss):
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = loss_func(y, y_pred)
    embedding = model.trainable_variables[0]
    embedding_gradients = tape.gradient(loss, [model.trainable_variables[0]])[0]
    embedding_gradients = tf.zeros_like(embedding) + embedding_gradients
    # Compute the perturbation
    delta = 0.2 * embedding_gradients / (
        tf.math.sqrt(tf.reduce_sum(embedding_gradients**2)) + 1e-8)
    model.trainable_variables[0].assign_add(delta)
    with tf.GradientTape() as tape2:
        y_pred = model(x, training=True)
        new_loss = loss_func(y, y_pred)
    gradients = tape2.gradient(new_loss, model.trainable_variables)
    model.trainable_variables[0].assign_sub(delta)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss.update_state(loss)

@tf.function
def printbar():
    ts = tf.timestamp()
    today_ts = ts % (24 * 60 * 60)
    hour = tf.cast(today_ts // 3600 + 8, tf.int32) % tf.constant(24)
    minite = tf.cast((today_ts % 3600) // 60, tf.int32)
    second = tf.cast(tf.floor(today_ts % 60), tf.int32)
    def timeformat(m):
        if tf.strings.length(tf.strings.format("{}", m)) == 1:
            return tf.strings.format("0{}", m)
        else:
            return tf.strings.format("{}", m)
    timestring = tf.strings.join(
        [timeformat(hour), timeformat(minite), timeformat(second)],
        separator=":")
    tf.print("==========" * 8, end="")
    tf.print(timestring)
```

Training code
```python
def train_model(model, ds_train, epochs):
    for epoch in tf.range(1, epochs + 1):
        for x, y in ds_train:
            train_step(model, x, y, loss_func, optimizer, train_loss)
        logs = 'Epoch={},Loss:{}'
        if epoch % 1 == 0:
            printbar()
            tf.print(tf.strings.format(logs, (epoch, train_loss.result())))
            tf.print("")
        train_loss.reset_states()

train_model(model, ds_train, 10)
```

Both methods above were only tested on small models; my GPU does not have enough memory, so I cannot give results for a BERT-base model. Feel free to copy the code and try it yourself.
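To make the two-pass FGM logic (clean gradient → perturb the embedding → adversarial gradient → restore → update) easy to verify without TensorFlow or a GPU, here is a framework-free NumPy sketch of the same step on a toy logistic-regression model, where the weight vector plays the role of the "embedding". All names and data below are illustrative, not from the original post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, x, y):
    # Binary cross-entropy loss and its gradient w.r.t. w
    p = np.clip(sigmoid(x @ w), 1e-12, 1 - 1e-12)
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad = x.T @ (p - y) / len(y)
    return loss, grad

def fgm_step(w, x, y, lr=0.1, epsilon=0.2):
    # 1) gradient of the clean loss w.r.t. the "embedding" w
    _, g = loss_and_grad(w, x, y)
    # 2) perturbation with L2 norm epsilon, in the gradient direction
    delta = epsilon * g / (np.sqrt(np.sum(g ** 2)) + 1e-8)
    # 3) gradient of the adversarial loss at w + delta
    _, g_adv = loss_and_grad(w + delta, x, y)
    # 4) w was never mutated (no restore needed); update with the
    #    adversarial gradient, mirroring optimizer.apply_gradients
    return w - lr * g_adv

# Toy separable data
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = (x @ w_true > 0).astype(float)

w = np.zeros(4)
for _ in range(200):
    w = fgm_step(w, x, y)
final_loss, _ = loss_and_grad(w, x, y)
```

The only structural difference from the TF code is that TF mutates the embedding variable in place (`assign_add` / `assign_sub`), whereas here `w + delta` leaves `w` untouched.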
For an introduction to FGM, see Su Jianlin's article:
Su Jianlin. (2020, Mar 01). 《對抗訓(xùn)練淺談:意義、方法和思考(附Keras實(shí)現(xiàn))》 [Blog post]. Retrieved from https://spaces.ac.cn/archives/7234
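As a quick sanity check on the perturbation formula used throughout the code above (the embedding gradient rescaled to a fixed L2 norm), here is a minimal NumPy sketch; the gradient values are made up for illustration.

```python
import numpy as np

def fgm_delta(grad, epsilon=0.2):
    # FGM perturbation: gradient direction, rescaled to L2 norm epsilon
    norm = np.sqrt(np.sum(grad ** 2)) + 1e-8
    return epsilon * grad / norm

g = np.array([3.0, 4.0])       # toy embedding gradient, L2 norm 5
delta = fgm_delta(g)           # same direction as g, L2 norm ~0.2
print(delta)
```

Whatever the magnitude of the raw gradient, the perturbation added to the embedding always has (almost exactly) norm epsilon, which is why the `1e-8` term is there: it only guards against division by zero.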