VAE (Variational Autoencoder) Study Notes
VAE學(xué)習(xí)筆記
An ordinary autoencoder can encode information such as images into feature vectors, but those feature vectors are usually not continuous in the latent space. A VAE (variational autoencoder) can encode image information into feature vectors whose latent space is continuous. The method is to add statistical information to the encoder and decoder: a feature vector represents a Gaussian distribution rather than a single point, and the feature vectors are forced to follow that Gaussian distribution.
The encoder maps an image to a Gaussian distribution (in practice, a mean and a log-variance). The decoder samples a point from the latent space and then passes it through fully connected, transposed-convolution, and convolution layers to recover an image the same size as the input.
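Sampling from a distribution is not itself differentiable, so this sampling step is normally written with the reparameterization trick. The post does not spell this out, but it is the standard formulation and exactly what the `sampling` function in the code below computes:

$$ z = \mu + \sigma \odot \epsilon = \mu + e^{\frac{1}{2}\log\sigma^2} \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I) $$

The randomness is moved into $\epsilon$, so gradients can flow through $\mu$ and $\log\sigma^2$ during training.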
損失函數(shù)有兩個(gè)目標(biāo):即(1)擬合原始圖片以及(2)使得特征空間具有良好的結(jié)構(gòu)及降低過(guò)擬合.因此,我們的損失函數(shù)由兩部分構(gòu)成.其中第二部分需要使得編碼出的正態(tài)分布圍繞在標(biāo)準(zhǔn)正態(tài)分布周?chē)?
實(shí)現(xiàn)代碼
import keras
from keras import layers
from keras import backend as K
from keras.models import Model
import numpy as np

img_shape = (28, 28, 1)
batch_size = 16
latent_dim = 2

##### Encoder: maps an image to the parameters of a Gaussian.
input_img = keras.Input(shape=img_shape)

x = layers.Conv2D(32, 3, padding='same', activation='relu')(input_img)
x = layers.Conv2D(64, 3, padding='same', activation='relu', strides=(2, 2))(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
shape_before_flattening = K.int_shape(x)

x = layers.Flatten()(x)
x = layers.Dense(32, activation='relu')(x)

# The input image is ultimately encoded as these two parameters.
z_mean = layers.Dense(latent_dim)(x)
z_log_var = layers.Dense(latent_dim)(x)
# Note that z_log_var = log(sigma^2) = 2*log(sigma).
##### End of the encoder.

##### Sampling function: draws from the given Gaussian.
##### This is where the statistical information enters the encoder.
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=1.)
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

# z must be defined before the decoder input that borrows its shape.
z = layers.Lambda(sampling)([z_mean, z_log_var])

##### Decoder.
decoder_input = layers.Input(K.int_shape(z)[1:])
x = layers.Dense(np.prod(shape_before_flattening[1:]), activation='relu')(decoder_input)
x = layers.Reshape(shape_before_flattening[1:])(x)
x = layers.Conv2DTranspose(32, 3, padding='same', activation='relu', strides=(2, 2))(x)
x = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x)
##### At this point x has been restored to a full-size image.

##### The next two lines connect the encoder and decoder through the sampling layer.
decoder = Model(decoder_input, x)
z_decoded = decoder(z)

##### Custom loss: reconstruction term plus KL term.
def vae_loss(y_true, y_pred):
    x = K.flatten(y_true)
    z_dec = K.flatten(y_pred)
    xent_loss = keras.metrics.binary_crossentropy(x, z_dec)
    kl_loss = -5e-4 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return K.mean(xent_loss + kl_loss)

vae = Model(input_img, z_decoded)
vae.compile(optimizer='rmsprop', loss=vae_loss)
vae.summary()

##### Train the model on MNIST.
from keras.datasets import mnist

(x_train, _), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.astype('float32') / 255.
x_test = x_test.reshape(x_test.shape + (1,))

vae.fit(x=x_train, y=x_train,
        shuffle=True,
        epochs=10,
        batch_size=batch_size,
        validation_data=(x_test, x_test))

##### Sample the latent space on a continuous grid and look at the decoded images.
import matplotlib.pyplot as plt
from scipy.stats import norm

n = 24
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
grid_x = norm.ppf(np.linspace(0.02, 0.98, n))
grid_y = norm.ppf(np.linspace(0.02, 0.98, n))

for i, yi in enumerate(grid_x):
    for j, xi in enumerate(grid_y):
        z_sample = np.array([[xi, yi]])
        x_decoded = decoder.predict(z_sample, batch_size=1)
        digit = x_decoded[0].reshape(digit_size, digit_size)
        figure[i * digit_size:(i + 1) * digit_size,
               j * digit_size:(j + 1) * digit_size] = digit

plt.figure(figsize=(15, 15))
plt.imshow(figure, cmap='Greys_r')
plt.show()

Output: a 24 x 24 grid of digits decoded from a continuous sweep of the 2-D latent space (result image not preserved here).
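As a quick follow-up (not part of the original post), one way to see the structure the KL term imposes is to wrap the mean branch of the encoder in its own Model and scatter-plot the test set in the 2-D latent space, colored by digit label. This minimal sketch reuses `input_img`, `z_mean`, `x_test`, and `y_test` from the code above; the name `encoder` is introduced here for illustration:

# Hypothetical follow-up: project test digits into the 2-D latent space
# and color them by class to inspect the layout the KL term produces.
encoder = Model(input_img, z_mean)   # mean branch only; shares the trained weights
z_test = encoder.predict(x_test, batch_size=batch_size)

plt.figure(figsize=(8, 8))
plt.scatter(z_test[:, 0], z_test[:, 1], c=y_test, cmap='viridis', s=2)
plt.colorbar()                        # one color per digit class 0-9
plt.xlabel('z[0]')
plt.ylabel('z[1]')
plt.show()

If training succeeded, the classes should form contiguous clusters around the origin rather than scattered islands, which is exactly the latent-space continuity discussed at the start of these notes.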