Deep Learning Summary: GAN (Principle, Algorithm Description, PyTorch Implementation)
Table of Contents
- GAN schematic:
- Original GAN algorithm description:
- PyTorch implementation
- Building the generator and discriminator:
- Generating fake data:
- Generating real data:
- Defining D's loss and G's loss (really the forward pass):
- The optimization step (really the backward pass):
- Making each optimizer update only its designated parameters, to fix G and train D, or fix D and train G:
- Fix G, train D:
- Fix D, train G:
- The full training loop:
GAN schematic:
Original GAN algorithm description:
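The original algorithm figure is not reproduced here. In Goodfellow et al.'s formulation, the discriminator D and generator G play the following minimax game, which the losses implemented below correspond to term by term:

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z(z)}\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Each training iteration alternates between a gradient ascent step on D (maximize V) and a gradient descent step on G (minimize V); the code below does one step of each per iteration.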
PyTorch implementation
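The snippets below reference several constants that are defined elsewhere in the original tutorial. A plausible setup is sketched here; the hyper-parameter values are assumptions (they match a common version of this example), so treat them as placeholders:

```python
import numpy as np
import torch
import torch.nn as nn

BATCH_SIZE = 64        # number of "paintings" per batch
LR_G = 0.0001          # learning rate for the generator
LR_D = 0.0001          # learning rate for the discriminator
N_IDEAS = 5            # size of G's random input vector
ART_COMPONENTS = 15    # number of points defining one painting

# x-coordinates at which every painting is evaluated, one identical row per batch item
PAINT_POINTS = np.vstack([np.linspace(-1, 1, ART_COMPONENTS)
                          for _ in range(BATCH_SIZE)])
```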
Building the generator and discriminator:
```python
G = nn.Sequential(                   # Generator
    nn.Linear(N_IDEAS, 128),         # random ideas (could come from a normal distribution)
    nn.ReLU(),
    nn.Linear(128, ART_COMPONENTS),  # making a painting from these random ideas
)

D = nn.Sequential(                   # Discriminator
    nn.Linear(ART_COMPONENTS, 128),  # receives art work, either from the famous artist or a newbie like G
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),                    # tell the probability that the art work is made by the artist
)
```

Generating fake data:
```python
G_ideas = torch.randn(BATCH_SIZE, N_IDEAS)  # random ideas
G_paintings = G(G_ideas)                    # fake paintings from G (random ideas)
```

Generating real data:
```python
def artist_works():  # paintings from the famous artist (real target)
    a = np.random.uniform(1, 2, size=BATCH_SIZE)[:, np.newaxis]
    paintings = a * np.power(PAINT_POINTS, 2) + (a - 1)
    paintings = torch.from_numpy(paintings).float()
    return paintings

artist_paintings = artist_works()  # real paintings from the artist
```

Defining D's loss and G's loss (really the forward pass):
These losses are what connect G and D into a single computation path; this is where PyTorch's dynamic-graph design really shows.
```python
prob_artist0 = D(artist_paintings)  # D tries to increase this probability
prob_artist1 = D(G_paintings)       # D tries to reduce this probability
D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1))
G_loss = torch.mean(torch.log(1. - prob_artist1))
```

The optimization step (really the backward pass):
To implement "fix G, train D" and "fix D, train G", we give each optimizer only the subset of parameters it is allowed to update:
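This mechanism can be sanity-checked with a tiny sketch (two stand-in linear models, not the tutorial's networks): an optimizer constructed from `D.parameters()` only ever touches D's parameters, so stepping it leaves G unchanged even though the loss, and therefore the gradients, flow through both models.

```python
import torch
import torch.nn as nn

G = nn.Linear(3, 4)  # stand-in generator
D = nn.Linear(4, 1)  # stand-in discriminator
opt_D = torch.optim.SGD(D.parameters(), lr=0.1)

g_before = G.weight.clone()
d_before = D.weight.clone()

loss = D(G(torch.randn(2, 3))).mean()  # loss depends on both G and D
loss.backward()                        # gradients flow into both G and D
opt_D.step()                           # ...but only D's parameters move

assert torch.equal(G.weight, g_before)      # G untouched by opt_D
assert not torch.equal(D.weight, d_before)  # D was updated
assert G.weight.grad is not None            # even though G received gradients
```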
```python
opt_D = torch.optim.Adam(D.parameters(), lr=LR_D)
opt_G = torch.optim.Adam(G.parameters(), lr=LR_G)
```

Fix G, train D:
`opt_D.zero_grad()`: clear the accumulated gradients first, so that gradients left over from earlier iterations do not affect the current update.
`D_loss.backward(retain_graph=True)`: compute gradients over the whole graph. What does `retain_graph=True` mean? By default, PyTorch frees the computation graph after a single `backward()` call; once it is freed, you must rerun the forward pass from scratch before backpropagating again. Here we still need to backpropagate through the D part of the same graph when training G below, so the graph must be kept.
`opt_D.step()`: update only the designated parameters (D's) according to their gradients.
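The effect of `retain_graph` can be seen in a minimal sketch: after a plain `backward()` the graph is freed, so a second `backward()` through it raises a `RuntimeError`, while `retain_graph=True` keeps it alive for reuse.

```python
import torch

x = torch.tensor(2.0, requires_grad=True)

y = x * x
y.backward(retain_graph=True)  # graph kept alive
y.backward()                   # second pass through the same graph works
assert x.grad.item() == 8.0    # gradients accumulate: dy/dx = 4, twice

z = x * x
z.backward()                   # graph freed after this call (the default)
raised = False
try:
    z.backward()               # reusing the freed graph fails
except RuntimeError:
    raised = True
assert raised
```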
Fix D, train G:
`G_loss.backward()`: since `G_loss = torch.mean(torch.log(1. - prob_artist1))` and `prob_artist1 = D(G_paintings)`, backpropagating G_loss has to pass back through D before reaching G's parameters. No new forward pass is needed, because this runs on the original graph, which is exactly why `retain_graph=True` was required above.
```python
opt_G.zero_grad()
G_loss.backward()
opt_G.step()
```

The full training loop:
```python
def artist_works():  # paintings from the famous artist (real target)
    a = np.random.uniform(1, 2, size=BATCH_SIZE)[:, np.newaxis]
    paintings = a * np.power(PAINT_POINTS, 2) + (a - 1)
    paintings = torch.from_numpy(paintings).float()
    return paintings

G = nn.Sequential(                   # Generator
    nn.Linear(N_IDEAS, 128),         # random ideas (could come from a normal distribution)
    nn.ReLU(),
    nn.Linear(128, ART_COMPONENTS),  # making a painting from these random ideas
)

D = nn.Sequential(                   # Discriminator
    nn.Linear(ART_COMPONENTS, 128),  # receives art work, either from the famous artist or a newbie like G
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),                    # tell the probability that the art work is made by the artist
)

opt_D = torch.optim.Adam(D.parameters(), lr=LR_D)
opt_G = torch.optim.Adam(G.parameters(), lr=LR_G)

plt.ion()  # continuous plotting

for step in range(10000):
    artist_paintings = artist_works()           # real paintings from the artist
    G_ideas = torch.randn(BATCH_SIZE, N_IDEAS)  # random ideas
    G_paintings = G(G_ideas)                    # fake paintings from G

    prob_artist0 = D(artist_paintings)          # D tries to increase this probability
    prob_artist1 = D(G_paintings)               # D tries to reduce this probability

    D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1))
    G_loss = torch.mean(torch.log(1. - prob_artist1))

    opt_D.zero_grad()
    D_loss.backward(retain_graph=True)  # reusing the computational graph
    opt_D.step()

    opt_G.zero_grad()
    G_loss.backward()
    opt_G.step()
```
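As a quick numeric sanity check of the two loss formulas used above (with hypothetical single-sample probabilities standing in for the batch means, and plain `math` instead of torch):

```python
import math

prob_artist0 = 0.9  # D's score on a real painting (assumed value)
prob_artist1 = 0.1  # D's score on a fake painting (assumed value)

# D_loss = -mean(log D(x) + log(1 - D(G(z))))
D_loss = -(math.log(prob_artist0) + math.log(1.0 - prob_artist1))
# G_loss = mean(log(1 - D(G(z))))
G_loss = math.log(1.0 - prob_artist1)

print(round(D_loss, 4))  # 0.2107: small, since D classifies both samples well
print(round(G_loss, 4))  # -0.1054
```

Minimizing G_loss means making log(1 - prob_artist1) as negative as possible, i.e. pushing prob_artist1 toward 1 so that D mistakes G's paintings for the artist's.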