Applying Deep Learning (EEGNet) to Process EEG Signals
Source: the "Brain-Computer Interface Community" (脑机接口社区) WeChat account, mp.weixin.qq.com.

This article covers two parts: an overview of the EEGNet paper, and an implementation of the EEGNet network.
EEGNet Paper Overview
Brain-computer interfaces (BCIs) use neural activity as a control signal, enabling direct communication with a computer. This neural signal is generally chosen from a variety of well-studied electroencephalographic (EEG) signals. Convolutional neural networks (CNNs), already widespread in computer vision and speech recognition, are mainly used for automatic feature extraction and classification. CNNs have been applied successfully to EEG-based BCIs, but mostly to a single BCI paradigm at a time, with little use across paradigms. The authors therefore ask whether a single CNN architecture can accurately classify EEG signals from different BCI paradigms while staying as compact as possible (compactness being defined as the number of parameters in the model). The paper introduces EEGNet, a compact convolutional neural network for EEG-based BCIs, and shows how depthwise and separable convolutions can be used to build an EEG-specific model that encapsulates feature-extraction concepts common in brain-computer interfacing. The paper compares EEGNet against current state-of-the-art approaches for within-subject and cross-subject classification across four BCI paradigms: P300 visual evoked potentials, error-related negativity (ERN), movement-related cortical potentials (MRCP), and sensorimotor rhythms (SMR). The results show that with limited training data, EEGNet generalizes better and achieves higher performance than the reference algorithms. The paper also demonstrates that EEGNet generalizes well to both ERP-based and oscillation-based BCIs.
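The depthwise and separable convolutions mentioned above are standard building blocks. As a rough illustration only (my own sketch with assumed sizes of F1 = 8 temporal filters, depth multiplier D = 2, and C = 64 electrodes; not the authors' exact architecture):

import torch.nn as nn

# Depthwise convolution: learns D spatial filters spanning all C electrodes
# for each temporal feature map, with no mixing across maps (groups=F1).
depthwise = nn.Conv2d(8, 16, kernel_size=(64, 1), groups=8, bias=False)

# Separable convolution: a depthwise temporal convolution followed by a
# 1x1 pointwise convolution that mixes the feature maps.
separable = nn.Sequential(
    nn.Conv2d(16, 16, kernel_size=(1, 16), padding=(0, 8), groups=16, bias=False),
    nn.Conv2d(16, 16, kernel_size=(1, 1), bias=False),
)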
The network structure diagram is as follows:
The experimental results are shown in the figure below. The differences between the CNN models on the P300 dataset are very small, but there are significant differences on the MRCP dataset, where both EEGNet models outperform all other models. On the ERN dataset, both EEGNet models likewise outperform all other models (p < 0.05).
The figure below shows each model's classification performance on the P300, ERN, and MRCP datasets, averaged over 30 folds. For the P300 and MRCP datasets the differences between the DeepConvNet and EEGNet models are small, and both outperform ShallowConvNet. For the ERN dataset, the reference algorithm (xDAWN + RG) significantly outperforms all other models.
The figure below visualizes the features learned by the EEGNet-4,1 model configuration:
(A) the spatial topography of each spatial filter;
(B) the mean wavelet time-frequency difference between target and non-target trials for each filter.
In the figure below, the top row shows single-trial EEG feature relevance, computed with DeepLIFT, for a cross-subject-trained EEGNet-8,2 model on three different test trials from the MRCP dataset:
(A) a high-confidence, correct prediction of left-hand movement;
(B) a high-confidence, correct prediction of right-hand movement;
(C) a low-confidence, incorrect prediction of left-hand movement.
Each title gives the true class label and the predicted probability of that label.
The bottom row shows the spatial distribution of relevance at two time points: roughly 50 ms and 150 ms after the button press. As expected, the high-confidence trials show relevance correctly localized over the contralateral motor cortex for the left (A) and right (B) button presses. For the low-confidence trial, the relevance is more mixed and broadly distributed, with no clear spatial localization over the motor cortex.
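The paper computes these relevance maps with DeepLIFT. As a rough stand-in that needs nothing beyond autograd (a sketch of gradient x input saliency, not the paper's method; net and X_test refer to the trained model and test data defined in the implementation section below):

# One trial of shape (1, 1, 120, 64), with gradients enabled on the input.
trial = torch.from_numpy(X_test[:1]).requires_grad_(True)
net.eval()                   # deterministic forward pass (dropout off)
net(trial).sum().backward()  # gradient of the predicted probability w.r.t. the input
saliency = (trial.grad * trial).detach().numpy()[0, 0]  # (timepoints, electrodes)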
EEGNet Network Implementation
The code provided by the authors targets an older version of PyTorch and therefore contains a few errors. Rose小哥 adapted the authors' code to run under PyTorch 1.3.1 (CPU only); it has been tested and runs in that environment, though incompatibilities with other environments cannot be ruled out.
# Import packages
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score, accuracy_score
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import torch.nn.functional as F

The EEGNet network model is defined as follows:
class EEGNet(nn.Module):
    def __init__(self):
        super(EEGNet, self).__init__()
        self.T = 120  # number of timepoints per sample

        # Layer 1: temporal convolution over the 64 electrodes.
        self.conv1 = nn.Conv2d(1, 16, (1, 64), padding=0)
        # Note: the second positional argument of BatchNorm2d is eps, so the
        # False here means eps=0 (it was likely intended as affine=False).
        self.batchnorm1 = nn.BatchNorm2d(16, False)

        # Layer 2
        self.padding1 = nn.ZeroPad2d((16, 17, 0, 1))
        self.conv2 = nn.Conv2d(1, 4, (2, 32))
        self.batchnorm2 = nn.BatchNorm2d(4, False)
        self.pooling2 = nn.MaxPool2d(2, 4)

        # Layer 3
        self.padding2 = nn.ZeroPad2d((2, 1, 4, 3))
        self.conv3 = nn.Conv2d(4, 4, (8, 4))
        self.batchnorm3 = nn.BatchNorm2d(4, False)
        self.pooling3 = nn.MaxPool2d((2, 4))

        # Fully connected layer.
        # This dimension depends on the number of timepoints per sample;
        # with 120 timepoints it works out to 4 * 2 * 7 features.
        self.fc1 = nn.Linear(4 * 2 * 7, 1)

    def forward(self, x):
        # Layer 1
        x = F.elu(self.conv1(x))
        x = self.batchnorm1(x)
        x = F.dropout(x, 0.25, training=self.training)
        x = x.permute(0, 3, 1, 2)

        # Layer 2
        x = self.padding1(x)
        x = F.elu(self.conv2(x))
        x = self.batchnorm2(x)
        x = F.dropout(x, 0.25, training=self.training)
        x = self.pooling2(x)

        # Layer 3
        x = self.padding2(x)
        x = F.elu(self.conv3(x))
        x = self.batchnorm3(x)
        x = F.dropout(x, 0.25, training=self.training)
        x = self.pooling3(x)

        # Fully connected layer
        x = x.view(-1, 4 * 2 * 7)
        x = torch.sigmoid(self.fc1(x))
        return x
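As a quick sanity check (my addition), pushing a random batch through the model confirms where the 4 * 2 * 7 = 56 flattened features come from:

# A batch of 2 trials: 1 input channel, 120 timepoints, 64 electrodes.
m = EEGNet()
x = torch.from_numpy(np.random.rand(2, 1, 120, 64).astype('float32'))
print(m(x).shape)  # torch.Size([2, 1]) -- one sigmoid probability per trial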
Define the evaluation metrics:
acc: accuracy
auc: AUC, the area under the ROC curve
recall: recall
precision: precision
fmeasure: the F1 score, 2 * precision * recall / (precision + recall)
def evaluate(model, X, Y, params=["acc"]):
    results = []
    batch_size = 100

    # Predict batch by batch and stitch the per-batch outputs together.
    model.eval()  # disable dropout for deterministic predictions
    predicted = []
    for s in range(0, len(X), batch_size):
        inputs = Variable(torch.from_numpy(X[s:s + batch_size]))
        pred = model(inputs)
        predicted.append(pred.data.cpu().numpy())
    predicted = np.concatenate(predicted).ravel()
    model.train()  # restore training mode

    """
    Supported evaluation metrics:
    acc:       accuracy
    auc:       AUC, the area under the ROC curve
    recall:    recall
    precision: precision
    fmeasure:  F1 score
    """
    for param in params:
        if param == 'acc':
            results.append(accuracy_score(Y, np.round(predicted)))
        if param == "auc":
            results.append(roc_auc_score(Y, predicted))
        if param == "recall":
            results.append(recall_score(Y, np.round(predicted)))
        if param == "precision":
            results.append(precision_score(Y, np.round(predicted)))
        if param == "fmeasure":
            precision = precision_score(Y, np.round(predicted))
            recall = recall_score(Y, np.round(predicted))
            results.append(2 * precision * recall / (precision + recall))
    return results
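One caveat (my note, not from the original post): if the model predicts the same class for every sample, precision and recall are both zero and the fmeasure branch above divides by zero. sklearn's f1_score handles that edge case:

from sklearn.metrics import f1_score

# Drop-in replacement for the fmeasure branch; returns 0.0 instead of
# dividing by zero when precision + recall == 0:
# results.append(f1_score(Y, np.round(predicted)))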
Build the EEGNet network, and set up the binary cross-entropy loss and the Adam optimizer:

# Define the network
net = EEGNet()
# Binary cross-entropy loss
criterion = nn.BCELoss()
# Adam optimizer
optimizer = optim.Adam(net.parameters())
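A design note (my observation, not from the original post): nn.BCELoss expects probabilities, which is why EEGNet.forward ends in a sigmoid. A numerically more stable variant is to have forward return raw logits and fuse the sigmoid into the loss:

# Only valid if the final sigmoid is removed from EEGNet.forward:
logits_criterion = nn.BCEWithLogitsLoss()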
""" 生成訓(xùn)練數(shù)據(jù)集,數(shù)據(jù)集有100個(gè)樣本 訓(xùn)練數(shù)據(jù)X_train:為[0,1)之間的隨機(jī)數(shù); 標(biāo)簽數(shù)據(jù)y_train:為0或1 """ X_train = np.random.rand(100, 1, 120, 64).astype('float32') y_train = np.round(np.random.rand(100).astype('float32')) """ 生成驗(yàn)證數(shù)據(jù)集,數(shù)據(jù)集有100個(gè)樣本 驗(yàn)證數(shù)據(jù)X_val:為[0,1)之間的隨機(jī)數(shù); 標(biāo)簽數(shù)據(jù)y_val:為0或1 """ X_val = np.random.rand(100, 1, 120, 64).astype('float32') y_val = np.round(np.random.rand(100).astype('float32')) """ 生成測(cè)試數(shù)據(jù)集,數(shù)據(jù)集有100個(gè)樣本 測(cè)試數(shù)據(jù)X_test:為[0,1)之間的隨機(jī)數(shù); 標(biāo)簽數(shù)據(jù)y_test:為0或1 """ X_test = np.random.rand(100, 1, 120, 64).astype('float32') y_test = np.round(np.random.rand(100).astype('float32'))訓(xùn)練并驗(yàn)證
Train and validate:

batch_size = 32

# Training loop
for epoch in range(10):
    print("\nEpoch ", epoch)
    running_loss = 0.0
    for i in range(len(X_train) // batch_size - 1):
        s = i * batch_size
        e = i * batch_size + batch_size
        inputs = torch.from_numpy(X_train[s:e])
        labels = torch.FloatTensor(np.array([y_train[s:e]]).T * 1.0)

        # Wrap them in Variables
        inputs, labels = Variable(inputs), Variable(labels)

        # Zero the parameter gradients
        optimizer.zero_grad()

        # Forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

    # Evaluate after each epoch
    params = ["acc", "auc", "fmeasure"]
    print(params)
    print("Training Loss ", running_loss)
    print("Train - ", evaluate(net, X_train, y_train, params))
    print("Validation - ", evaluate(net, X_val, y_val, params))
    print("Test - ", evaluate(net, X_test, y_test, params))

A sample run produces output like the following:

Epoch 0
['acc', 'auc', 'fmeasure']
Training Loss 1.6107637286186218
Train - [0.52, 0.5280448717948718, 0.6470588235294118]
Validation - [0.55, 0.450328407224959, 0.693877551020408]
Test - [0.54, 0.578926282051282, 0.6617647058823529]
Epoch 1
['acc', 'auc', 'fmeasure']
Training Loss 1.5536684393882751
Train - [0.45, 0.41145833333333337, 0.5454545454545454]
Validation - [0.55, 0.4823481116584565, 0.6564885496183207]
Test - [0.65, 0.6530448717948717, 0.7107438016528926]
Epoch 2
['acc', 'auc', 'fmeasure']
Training Loss 1.5197088718414307
Train - [0.49, 0.5524839743589743, 0.5565217391304348]
Validation - [0.53, 0.5870279146141215, 0.5436893203883495]
Test - [0.57, 0.5428685897435898, 0.5567010309278351]
Epoch 3
['acc', 'auc', 'fmeasure']
Training Loss 1.4534167051315308
Train - [0.53, 0.5228365384615385, 0.4597701149425287]
Validation - [0.5, 0.48152709359605916, 0.46808510638297873]
Test - [0.61, 0.6502403846153847, 0.5517241379310345]
Epoch 4
['acc', 'auc', 'fmeasure']
Training Loss 1.3821702003479004
Train - [0.46, 0.4651442307692308, 0.3076923076923077]
Validation - [0.47, 0.5977011494252874, 0.29333333333333333]
Test - [0.52, 0.5268429487179488, 0.35135135135135137]
Epoch 5
['acc', 'auc', 'fmeasure']
Training Loss 1.440490186214447
Train - [0.56, 0.516025641025641, 0.35294117647058826]
Validation - [0.36, 0.3801313628899836, 0.2]
Test - [0.53, 0.6113782051282052, 0.27692307692307694]
Epoch 6
['acc', 'auc', 'fmeasure']
Training Loss 1.4722238183021545
Train - [0.47, 0.4194711538461539, 0.13114754098360656]
Validation - [0.46, 0.5648604269293925, 0.2285714285714286]
Test - [0.5, 0.5348557692307693, 0.10714285714285714]
Epoch 7
['acc', 'auc', 'fmeasure']
Training Loss 1.3460421562194824
Train - [0.51, 0.44871794871794873, 0.1694915254237288]
Validation - [0.44, 0.4490968801313629, 0.2]
Test - [0.53, 0.4803685897435898, 0.14545454545454545]
Epoch 8
['acc', 'auc', 'fmeasure']
Training Loss 1.3336675763130188
Train - [0.54, 0.4130608974358974, 0.20689655172413793]
Validation - [0.39, 0.40394088669950734, 0.14084507042253522]
Test - [0.51, 0.5400641025641025, 0.19672131147540983]
Epoch 9
['acc', 'auc', 'fmeasure']
Training Loss 1.438510239124298
Train - [0.53, 0.5392628205128205, 0.22950819672131148]
Validation - [0.42, 0.4848111658456486, 0.09375]
Test - [0.56, 0.5420673076923076, 0.2413793103448276]
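Since both the inputs and the labels are random noise, accuracy and AUC hover around chance level (about 0.5) across all ten epochs, which is exactly what the log above shows. With a model trained on real data, the weights could be saved and restored like this (my addition; the file name is made up):

# Save the trained weights
torch.save(net.state_dict(), "eegnet_demo.pt")

# Restore into a fresh instance for inference
model = EEGNet()
model.load_state_dict(torch.load("eegnet_demo.pt"))
model.eval()  # disables dropout at inference time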
References
应用深度学习EEGNet来处理脑电信号, Brain-Computer Interface Community (脑机接口社区), mp.weixin.qq.com