【Python-ML】LDA: Extracting a Feature Subspace That Optimizes Class Separability
This post walks through Linear Discriminant Analysis (LDA), a supervised feature-extraction technique that finds the feature subspace that best separates the classes, under the assumptions that the features are normally distributed and mutually independent. The worked example below uses the UCI Wine dataset.
# -*- coding: utf-8 -*-
'''
Created on 2018-01-18
@author: Jason.F
@summary: Feature extraction with LDA: supervised; finds the feature subspace that
          optimizes class separability, assuming normally distributed and mutually
          independent features.
'''
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split #sklearn.cross_validation was removed in favor of model_selection
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
#Step 1: load the data and standardize the raw d-dimensional dataset
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data',header=None)
df_wine.columns=['Class label','Alcohol','Malic acid','Ash','Alcalinity of ash','Magnesium','Total phenols','Flavanoids','Nonflavanoid phenols','Proanthocyanins','Color intensity','Hue','OD280/OD315 of diluted wines','Proline']
print ('class labels:',np.unique(df_wine['Class label']))
#Split into training and test sets
X,y=df_wine.iloc[:,1:].values,df_wine.iloc[:,0].values
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=0)
#Feature scaling: standardization
stdsc=StandardScaler()
X_train_std=stdsc.fit_transform(X_train)
X_test_std=stdsc.transform(X_test) #reuse the scaler fitted on the training set; do not refit on the test data
#Step 2: compute the d-dimensional mean vector for each class
np.set_printoptions(precision=4)
mean_vecs=[]
for label in range(1,4):
    mean_vecs.append(np.mean(X_train_std[y_train==label],axis=0))
    print ('MV %s: %s \n' %(label,mean_vecs[label-1]))
#Step 3: construct the between-class and within-class scatter matrices
d=13 #number of features
#Compute the within-class scatter matrix
#Check how the class samples are distributed in the training set; summing raw scatter matrices assumes uniformly distributed class labels
print ('class label distribution:%s' %np.bincount(y_train)[1:])
S_W=np.zeros((d,d)) #initialize the within-class scatter matrix
for label,mv in zip(range(1,4),mean_vecs):
    #class_scatter=np.zeros((d,d))
    #for row in X[y==label]:
    #    row,mv = row.reshape(d,1),mv.reshape(d,1)
    #    class_scatter += (row-mv).dot((row-mv).T)
    #The class labels are not uniformly distributed, so compute the scatter of the
    #standardized features via the covariance matrix, i.e. the normalized scatter matrix
    class_scatter=np.cov(X_train_std[y_train==label].T)
    S_W += class_scatter
print ('Within-class scatter matrix: %sx%s' %(S_W.shape[0],S_W.shape[1]))
#Compute the between-class scatter matrix
mean_overall = np.mean(X_train_std,axis=0)
S_B=np.zeros((d,d)) #initialize the between-class scatter matrix
for i,mean_vec in enumerate(mean_vecs):
    n=X_train_std[y_train==i+1,:].shape[0]
    mean_vec=mean_vec.reshape(d,1)
    mean_overall=mean_overall.reshape(d,1)
    S_B += n*(mean_vec-mean_overall).dot((mean_vec-mean_overall).T)
print ('Between-class scatter matrix: %sx%s' %(S_B.shape[0],S_B.shape[1]))
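#For reference, the standard LDA definitions implemented above:
#  S_W = sum_i S_i, where S_i = sum over samples x of class i of (x - m_i)(x - m_i)^T
#  (each S_i is swapped for the class covariance here to compensate for unequal class sizes)
#  S_B = sum_i n_i*(m_i - m)(m_i - m)^T, with m the overall mean and n_i the size of class i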
#Step 4: compute the eigenvalues and eigenvectors of inv(S_W).dot(S_B)
eigen_vals,eigen_vecs=np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
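#Note: with c classes, S_B has rank at most c-1, so at most c-1 eigenvalues are nonzero;
#for the 3-class Wine data only 2 meaningful linear discriminants exist, and the
#remaining eigenvalues printed below are numerical noise (on the order of 1e-13)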
eigen_pairs=[(np.abs(eigen_vals[i]), eigen_vecs[:, i]) for i in range(len(eigen_vals))]
eigen_pairs=sorted(eigen_pairs,key=lambda k:k[0],reverse=True)
print ('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
    print (eigen_val[0])
#Visualize how much class-discriminatory information each linear discriminant captures, with the eigenvalues sorted in decreasing order
tot=sum(eigen_vals.real)
discr=[(i/tot) for i in sorted(eigen_vals.real,reverse=True)]
cum_discr=np.cumsum(discr)
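#Each entry of discr is lambda_j / sum(lambda): the share of class-discriminatory
#information captured by discriminant j, analogous to PCA's explained-variance ratio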
plt.bar(range(1,14),discr,alpha=0.5,align='center',label='individual discriminability')
plt.step(range(1,14),cum_discr,where='mid',label='cumulative discriminability')
plt.ylabel('discriminability ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1,1.1])
plt.legend(loc='best')
plt.show()
#Step 5: select the eigenvectors of the k largest eigenvalues to build a d-by-k transformation matrix W, with the eigenvectors as its columns
w=np.hstack((eigen_pairs[0][1][:,np.newaxis].real,eigen_pairs[1][1][:,np.newaxis].real)) #take the top 2 discriminants to build the 13x2 projection matrix W
print ('Matrix W:\n',w)
#Step 6: project the samples onto the new feature subspace via W
X_train_lda=X_train_std.dot(w)
X_test_lda=X_test_std.dot(w)
colors=['r','b','g']
markers=['s','x','o']
for l,c,m in zip(np.unique(y_train),colors,markers):
    plt.scatter(X_train_lda[y_train == l, 0],X_train_lda[y_train == l, 1],c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='upper right')
plt.show()
#Step 7: train a linear classifier on the transformed dataset
lr=LogisticRegression()
lr.fit(X_train_lda,y_train)
print ('Training accuracy:',lr.score(X_train_lda, y_train))
print ('Test accuracy:',lr.score(X_test_lda, y_test))
Results:
class labels: [1 2 3]
MV 1: [ 0.9259 -0.3091  0.2592 -0.7989  0.3039  0.9608  1.0515 -0.6306  0.5354  0.2209  0.4855  0.798   1.2017]
MV 2: [-0.8727 -0.3854 -0.4437  0.2481 -0.2409 -0.1059  0.0187 -0.0164  0.1095 -0.8796  0.4392  0.2776 -0.7016]
MV 3: [ 0.1637  0.8929  0.3249  0.5658 -0.01   -0.9499 -1.228   0.7436 -0.7652  0.979  -1.1698 -1.3007 -0.3912]
class label distribution: [40 49 35]
Within-class scatter matrix: 13x13
Between-class scatter matrix: 13x13
Eigenvalues in decreasing order:
452.721581245
156.43636122
1.07585370555e-13
4.43873563999e-14
2.87266009341e-14
2.84217094304e-14
2.40168676571e-14
1.59453089024e-14
1.59453089024e-14
9.93723443031e-15
9.93723443031e-15
2.82769841287e-15
2.82769841287e-15
Matrix W:
[[-0.0662 -0.3797]
 [ 0.0386 -0.2206]
 [-0.0217 -0.3816]
 [ 0.184   0.3018]
 [-0.0034  0.0141]
 [ 0.2326  0.0234]
 [-0.7747  0.1869]
 [-0.0811  0.0696]
 [ 0.0875  0.1796]
 [ 0.185  -0.284 ]
 [-0.066   0.2349]
 [-0.3805  0.073 ]
 [-0.3285 -0.5971]]
Training accuracy: 0.99193548387096775
Test accuracy: 1.0
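For comparison, scikit-learn ships this technique as a ready-made transformer, sklearn.discriminant_analysis.LinearDiscriminantAnalysis. The sketch below is a minimal equivalent of the manual pipeline above, reusing the X_train_std, X_test_std, y_train and y_test variables already defined; the projected axes may differ from the manual W by sign and scale, but class separation (and hence classifier accuracy) should match.

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

lda = LinearDiscriminantAnalysis(n_components=2)       #keep the 2 meaningful discriminants
X_train_lda = lda.fit_transform(X_train_std, y_train)  #supervised: the class labels are required
X_test_lda = lda.transform(X_test_std)                 #reuse the fitted transformer on the test set
lr = LogisticRegression()
lr.fit(X_train_lda, y_train)
print ('Training accuracy:', lr.score(X_train_lda, y_train))
print ('Test accuracy:', lr.score(X_test_lda, y_test))

Letting the library handle the eigendecomposition avoids the numerical-noise eigenvalues seen above and keeps the projection and classification steps to a few lines.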