【PyTorch Neural Network Practical Cases】12: Classifying FashionMNIST images with an attention-based neural network
生活随笔
1. Mask mode: this applies to variable-length recurrent sequences. When the input samples have different lengths, they are first aligned (short sequences are zero-padded, long ones truncated) before being fed to the model, so some samples end up containing many zero values. To keep these padded positions from distorting the computation, a mask is used to exclude the unwanted zeros and let only the non-zero positions take part in the attention calculation; that is the purpose of the mask.
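The masking idea can be sketched in a few lines: padded (all-zero) time steps are detected, and their attention scores are replaced with a huge negative value so that Softmax assigns them (near-)zero weight. The tensors below are toy values for illustration; the fill constant matches the one used in the full program further down.

```python
import torch

# batch=1, seq=3, dim=2; the last time step is zero padding
features = torch.tensor([[[0.5, -0.2],
                          [0.1,  0.3],
                          [0.0,  0.0]]])
scores = torch.tanh(features.sum(dim=-1))            # toy per-step scores in (-1, 1)
mask = torch.sign(torch.abs(features).sum(dim=-1))   # 1 for real steps, 0 for padding
scores = torch.where(mask == 1, scores,
                     torch.full_like(scores, -2.0 ** 32 + 1))  # bury padded steps
weights = torch.softmax(scores, dim=1)
print(weights)  # the padded step receives ~0 attention weight
```

Note that filling with 0 instead of a large negative number would not work here: 0 lies inside tanh's output range, so Softmax would treat it as a legitimate score.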
2. Mean mode: in normal mode an attention score is computed for every hidden dimension at every step of the sequence, whereas mean mode averages the attention scores across the dimensions. Mean mode smooths out the differences between the dimensions of the same sequence, treating all dimensions as equal and spending the attention across the time steps instead. This better reflects the relative importance of each step in the sequence.
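The mean branch reduces to a few shape manipulations: per-dimension scores of shape `[batch, dim, seq]` are averaged over the `dim` axis, then broadcast back so every dimension shares the same per-step weight. The tensors below are random placeholders just to show the shapes.

```python
import torch

batch, seq, dim = 2, 4, 3
# Per-dimension attention scores, normalized over the sequence axis
weight = torch.softmax(torch.rand(batch, dim, seq), dim=2)
mean_w = weight.mean(dim=1)                      # [batch, seq]: one score per time step
mean_w = mean_w.unsqueeze(1).repeat(1, dim, 1)   # broadcast back to [batch, dim, seq]
mean_w = mean_w.transpose(2, 1)                  # [batch, seq, dim], aligned with the features
features = torch.rand(batch, seq, dim)
out = mean_w * features                          # every dimension gets the same per-step weight
print(out.shape)  # torch.Size([2, 4, 3])
```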
Code: Attention_cclassification.py
```python
import os

import numpy as np
import pylab
import torch
import torchvision
import torchvision.transforms as tranforms
from matplotlib import pyplot as plt

os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'  # works around a duplicate-OpenMP-runtime issue (e.g. on macOS)

data_dir = './fashion_mnist'
tranform = tranforms.Compose([tranforms.ToTensor()])
train_dataset = torchvision.datasets.FashionMNIST(root=data_dir, train=True, transform=tranform, download=True)
print("Number of training samples:", len(train_dataset))
val_dataset = torchvision.datasets.FashionMNIST(root=data_dir, train=False, transform=tranform)
print("Number of test samples:", len(val_dataset))

im = train_dataset[0][0]
im = im.reshape(-1, 28)
pylab.imshow(im)
pylab.show()
print("Label of this image:", train_dataset[0][1])

# Build the data loaders
batch_size = 10
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=False)

def imshow(img):
    print("Image shape:", np.shape(img))
    npimg = img.numpy()
    plt.axis('off')
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

classes = ('T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle_Boot')
sample = iter(train_loader)
images, labels = next(sample)  # Python 3: use next(iterator), not iterator.next()
print("Sample shape:", np.shape(images))
print("Sample labels:", labels)
imshow(torchvision.utils.make_grid(images, nrow=batch_size))
print(','.join('%5s' % classes[labels[j]] for j in range(len(images))))

class myLSTMNet(torch.nn.Module):  # model with 2 LSTM layers, an attention layer and 1 fully connected layer
    def __init__(self, in_dim, hidden_dim, n_layer, n_class):
        super(myLSTMNet, self).__init__()
        # Recurrent layer
        self.lstm = torch.nn.LSTM(in_dim, hidden_dim, n_layer, batch_first=True)
        self.Linear = torch.nn.Linear(hidden_dim * 28, n_class)  # fully connected output layer
        self.attention = AttentionSeq(hidden_dim, hard=0.03)  # attention layer, using hard-mode attention

    def forward(self, t):  # forward pass
        t, _ = self.lstm(t)  # run the sequence through the LSTM
        t = self.attention(t)  # apply attention to the LSTM outputs
        t = t.reshape(t.shape[0], -1)  # flatten to 2D before the fully connected output layer
        out = self.Linear(t)
        return out

class AttentionSeq(torch.nn.Module):
    def __init__(self, hidden_dim, hard=0.0):
        super(AttentionSeq, self).__init__()
        self.hidden_dim = hidden_dim
        self.dense = torch.nn.Linear(hidden_dim, hidden_dim)
        self.hard = hard

    def forward(self, features, mean=False):
        # features: [batch, seq, dim]
        batch_size, time_step, hidden_dim = features.size()
        weight = torch.nn.Tanh()(self.dense(features))  # fully connected layer followed by tanh

        # Build the mask: padded (all-zero) time steps will get a huge negative score
        mask_idx = torch.sign(torch.abs(features).sum(dim=-1))
        # mask_idx = mask_idx.unsqueeze(-1).expand(batch_size, time_step, hidden_dim)
        mask_idx = mask_idx.unsqueeze(-1).repeat(1, 1, hidden_dim)

        # torch.where checks the condition elementwise: where it holds, the value is taken
        # from the second argument, otherwise from the third.
        # torch.full_like fills a tensor shaped like its first argument with the value
        # given as its second argument.
        weight = torch.where(mask_idx == 1, weight,
                             torch.full_like(mask_idx, (-2 ** 32 + 1)))
        weight = weight.transpose(2, 1)
        # Padded positions must be filled with a very small number, never with 0: the
        # scores come out of tanh(), whose range is (-1, 1), so 0 is a valid score
        # there and filling with 0 would distort the Softmax result. Only a value far
        # outside that range is guaranteed to be driven to 0 by Softmax.
        weight = torch.nn.Softmax(dim=2)(weight)  # attention scores
        if self.hard != 0:  # hard mode: zero out scores below the threshold
            weight = torch.where(weight > self.hard, weight, torch.full_like(weight, 0))
        if mean:  # mean mode: average the scores over the hidden dimensions
            weight = weight.mean(dim=1)
            weight = weight.unsqueeze(1)
            weight = weight.repeat(1, hidden_dim, 1)
        weight = weight.transpose(2, 1)
        features_attention = weight * features  # apply the attention scores to the features
        return features_attention

# Instantiate the model: the images are 28x28, so each input is a sequence of
# 28 steps with 28 features per step; 128 LSTM cells per layer; 2 stacked LSTM
# layers; 10 output classes.
network = myLSTMNet(28, 128, 2, 10)
# Select the device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
network.to(device)
print(network)  # print the network

criterion = torch.nn.CrossEntropyLoss()  # loss function
optimizer = torch.optim.Adam(network.parameters(), lr=0.01)

for epoch in range(2):  # iterate over the dataset twice
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):  # fetch batches
        inputs, labels = data
        inputs = inputs.squeeze(1)  # drop the channel axis: each image becomes a 28-step sequence
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()  # clear the previous gradients
        outputs = network(inputs)
        loss = criterion(outputs, labels)  # compute the loss
        loss.backward()  # backpropagation
        optimizer.step()  # update the parameters
        running_loss += loss.item()
        if i % 1000 == 999:
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 1000))  # average over the last 1000 batches
            running_loss = 0.0
print('Finished Training')

# Use the model
dataiter = iter(test_loader)
images, labels = next(dataiter)
inputs, labels = images.to(device), labels.to(device)

imshow(torchvision.utils.make_grid(images, nrow=batch_size))
print('Ground truth: ', ' '.join('%5s' % classes[labels[j]] for j in range(len(images))))
inputs = inputs.squeeze(1)
outputs = network(inputs)
_, predicted = torch.max(outputs, 1)
print('Predictions:  ', ' '.join('%5s' % classes[predicted[j]] for j in range(len(images))))

# Evaluate the model per class
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in test_loader:
        images, labels = data
        images = images.squeeze(1)
        inputs, labels = images.to(device), labels.to(device)
        outputs = network(inputs)
        _, predicted = torch.max(outputs, 1)
        predicted = predicted.to(device)
        c = (predicted == labels).squeeze()
        for i in range(10):  # batch_size is 10, so each batch holds 10 samples
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

sumacc = 0
for i in range(10):
    Accuracy = 100 * class_correct[i] / class_total[i]
    print('Accuracy of %5s : %2d %%' % (classes[i], Accuracy))
    sumacc = sumacc + Accuracy
print('Accuracy of all : %2d %%' % (sumacc / 10.))
```
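The hard mode the model enables with `AttentionSeq(hidden_dim, hard=0.03)` can be isolated into a small sketch: softmax scores at or below the threshold are simply zeroed, so only the strongest time steps contribute. The score values below are illustrative, not taken from the trained model.

```python
import torch

# Already-normalized attention scores for one batch, one hidden dimension
weight = torch.tensor([[[0.50, 0.40, 0.08, 0.02]]])
hard = 0.03  # threshold used by the model above
weight = torch.where(weight > hard, weight, torch.full_like(weight, 0))
print(weight)  # the 0.02 entry is zeroed out
```

Note that, as in the full implementation, the remaining scores are not renormalized after thresholding, so they no longer sum exactly to 1.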