Supervised Learning | SVM: Implementing Support Vector Machines with Sklearn
Table of Contents
- Sklearn Support Vector Machines
- 1. Support Vector Machine Classification
- 1.1 Linear SVM Classification
- 1.2 Nonlinear SVM Classification
- 1.2.1 Polynomial Kernel
- 1.2.2 Gaussian RBF Kernel
- 2. Support Vector Machine Regression
- 2.1 Linear SVM Regression
- 2.2 Nonlinear SVM Regression
- 2.2.1 Polynomial Kernel
- References
Related articles:
Machine Learning | Index
Machine Learning | Grid Search and Visualization
Supervised Learning | SVM: Linear Support Vector Machine Theory
Supervised Learning | SVM: Nonlinear Support Vector Machine Theory
Sklearn Support Vector Machines
SVM classes in sklearn.svm for classification:
svm.LinearSVC: Linear Support Vector Classification.
svm.NuSVC: Nu-Support Vector Classification.
svm.OneClassSVM: Unsupervised Outlier Detection.
svm.SVC: C-Support Vector Classification.
SVM classes in sklearn.svm for regression:
svm.LinearSVR: Linear Support Vector Regression.
svm.NuSVR: Nu Support Vector Regression.
svm.SVR: Epsilon-Support Vector Regression.
svm.l1_min_c: Return the lowest bound for C such that for C in (l1_min_C, infinity) the model is guaranteed not to be empty.
The support vectors of a fitted model can be inspected through model.support_vectors_.
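As a minimal sketch (the toy data below is made up purely for illustration; note that the kernel-based estimators such as SVC and SVR expose support_vectors_, while LinearSVC does not):

```python
# Sketch: inspect the support vectors of a fitted kernel SVM.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0., 0.], [1., 1.], [2., 0.], [3., 1.]])  # toy data, illustration only
y = np.array([0, 0, 1, 1])

model = SVC(kernel="linear", C=1.0)
model.fit(X, y)

print(model.support_vectors_)  # coordinates of the support vectors
print(model.support_)          # indices of the support vectors in X
print(model.n_support_)        # number of support vectors per class
```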
SVMs are very sensitive to feature scales. As the figure below shows, in the left plot the vertical scale is much larger than the horizontal scale, so the widest possible separating margin is nearly horizontal. After feature scaling (e.g. with Sklearn's StandardScaler), the decision boundary in the right plot looks much better.
Figure 1: The separating margin before and after feature scaling

Commonly used parameters:
C: the penalty coefficient, used with approximately linearly separable data. In a soft-margin linear SVM the objective has two parts: maximizing the margin around the support vectors, plus C times the penalty for data points that violate the margin. The larger C is, the heavier the penalty on margin violations, so fewer instances end up inside the margin (and the margin becomes narrower). The right value of C is problem-dependent, e.g. a medical model versus spam classification.
loss: the loss function. The objective of a linear SVM can be split into two parts: a loss term and a regularization term. The canonical loss is the hinge loss function (note that LinearSVC actually defaults to squared_hinge).
kernel: the kernel function of a nonlinear SVM. Common kernels are the linear kernel (which reduces to a linear SVM), the polynomial kernel, the Gaussian RBF kernel, and the sigmoid kernel.
gamma: a parameter of the Gaussian kernel, $\gamma = \frac{1}{2\sigma^2}$, where $\sigma$ is the horizontal width of the Gaussian bell curve. So gamma is inversely related to $\sigma$: the larger gamma is, the taller and narrower the bell; the smaller gamma is, the shorter and wider it is. In an SVM this looks as follows:
Its cross-section is:
Therefore, the larger gamma is, the more likely the model is to overfit; the smaller gamma is, the more likely it is to underfit.
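As a small illustrative sketch (my own addition, not from the original article), the Gaussian RBF kernel can be computed directly, which makes the role of gamma concrete:

```python
# Sketch: the Gaussian RBF kernel K(x, x') = exp(-gamma * ||x - x'||^2).
# A larger gamma makes the similarity drop off faster with distance,
# so each training instance influences a smaller neighbourhood.
import numpy as np

def rbf_kernel(x, x_prime, gamma):
    return np.exp(-gamma * np.sum((x - x_prime) ** 2))

x, landmark = np.array([1.0, 0.5]), np.array([0.0, 0.0])
for gamma in (0.1, 1.0, 5.0):
    print(gamma, rbf_kernel(x, landmark, gamma))
# The similarity shrinks as gamma grows: the "bell" around the landmark narrows.
```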
1. Support Vector Machine Classification
1.1 Linear SVM Classification
sklearn.svm.LinearSVC
Parameters:
C: float, optional (default=1.0)
  Penalty parameter, default 1. The larger C is, the narrower the margin and the fewer instances fall inside it.
loss: string, 'hinge' or 'squared_hinge' (default='squared_hinge')
  The loss parameter should be set to 'hinge', since that is not the default value.
dual: bool, (default=True)
  Defaults to True. Unless there are more features than training instances, it should be set to False.
See the official documentation for the remaining parameters.
The LinearSVC class regularizes the bias term, so you should center the training set first by subtracting the mean. This happens automatically if you scale the data with StandardScaler.
LinearSVC() is equivalent to SVC(kernel='linear'), but the latter is much slower.
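As a rough sketch of that equivalence (my own addition, not from the original article; the solutions differ slightly because the solvers and regularization details differ, and the SGD mapping alpha = 1/(m*C) is only approximate):

```python
# Three roughly equivalent ways to build a linear SVM classifier.
from sklearn.svm import LinearSVC, SVC
from sklearn.linear_model import SGDClassifier

C, m = 1.0, 100  # m = number of training instances (assumed here)

lin_clf = LinearSVC(C=C, loss="hinge")                 # liblinear: fast, scales well
svc_clf = SVC(kernel="linear", C=C)                    # libsvm: exact but much slower on large sets
sgd_clf = SGDClassifier(loss="hinge", alpha=1/(m*C))   # SGD: useful for out-of-core training
```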
```python
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

iris = datasets.load_iris()
X = iris["data"][:, (2, 3)]                   # petal length, petal width
y = (iris["target"] == 2).astype(np.float64)  # Iris-Virginica

svm_clf = Pipeline([
    ("scaler", StandardScaler()),
    ("linear_svc", LinearSVC(C=1, loss="hinge", random_state=42)),
])
svm_clf.fit(X, y)
```

```
Pipeline(memory=None,
         steps=[('scaler',
                 StandardScaler(copy=True, with_mean=True, with_std=True)),
                ('linear_svc',
                 LinearSVC(C=1, class_weight=None, dual=True,
                           fit_intercept=True, intercept_scaling=1,
                           loss='hinge', max_iter=1000, multi_class='ovr',
                           penalty='l2', random_state=42, tol=0.0001,
                           verbose=0))],
         verbose=False)
```

```python
svm_clf.predict([[5.5, 1.7]])
```

```
array([1.])
```

Unlike a Logistic Regression classifier, an SVM classifier does not output probabilities for each class.
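If class scores are needed, decision_function gives the signed distance to the decision boundary, and SVC (not LinearSVC) can estimate probabilities via internal cross-validation when constructed with probability=True. A hedged sketch reusing the svm_clf pipeline above (the prob_clf name and parameter choices are my own, for illustration):

```python
# Signed distance of a sample to the decision boundary (in margin units):
svm_clf.decision_function([[5.5, 1.7]])

# If calibrated probabilities are required, SVC can be used instead of LinearSVC.
# probability=True adds an internal cross-validation step, so training is slower.
from sklearn.svm import SVC

prob_clf = Pipeline([
    ("scaler", StandardScaler()),
    ("svc", SVC(kernel="linear", C=1, probability=True, random_state=42)),
])
prob_clf.fit(X, y)
prob_clf.predict_proba([[5.5, 1.7]])   # e.g. [[p_not_virginica, p_virginica]]
```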
1.2 Nonlinear SVM Classification
Although linear SVM classifiers are effective in many cases and often work surprisingly well, many datasets are not linearly separable. A nonlinear SVM is then needed to transform the data so that it becomes linearly separable, for example by applying a polynomial transformation, as shown below:
Figure 2: Transforming nonlinear data so that it becomes linearly separable

There are two ways to implement this idea with Sklearn: the first is to apply a polynomial transformation and feature scaling first, and then hand the result to a linear LinearSVC classifier; the second is to use the SVC classifier directly with a polynomial kernel.
Let's start with the first approach and test it on the moons dataset:
```python
from sklearn.datasets import make_moons
import matplotlib.pyplot as plt

X, y = make_moons(n_samples=100, noise=0.15, random_state=42)

def plot_dataset(X, y, axes):
    plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
    plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
    plt.axis(axes)
    plt.grid(True, which='both')
    plt.xlabel(r"$x_1$", fontsize=20)
    plt.ylabel(r"$x_2$", fontsize=20, rotation=0)

plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
```

```python
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

polynomial_svm_clf = Pipeline([
    ("poly_features", PolynomialFeatures(degree=3)),
    ("scaler", StandardScaler()),
    ("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42))
])
polynomial_svm_clf.fit(X, y)
```

```
Pipeline(memory=None,
         steps=[('poly_features',
                 PolynomialFeatures(degree=3, include_bias=True,
                                    interaction_only=False, order='C')),
                ('scaler',
                 StandardScaler(copy=True, with_mean=True, with_std=True)),
                ('svm_clf',
                 LinearSVC(C=10, class_weight=None, dual=True,
                           fit_intercept=True, intercept_scaling=1,
                           loss='hinge', max_iter=1000, multi_class='ovr',
                           penalty='l2', random_state=42, tol=0.0001,
                           verbose=0))],
         verbose=False)
```

```python
def plot_predictions(clf, axes):
    x0s = np.linspace(axes[0], axes[1], 100)
    x1s = np.linspace(axes[2], axes[3], 100)
    x0, x1 = np.meshgrid(x0s, x1s)
    X = np.c_[x0.ravel(), x1.ravel()]
    y_pred = clf.predict(X).reshape(x0.shape)
    y_decision = clf.decision_function(X).reshape(x0.shape)
    plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
    plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)

plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
```

Figure 3: Linear SVM classifier using polynomial features

The other approach is to use the SVC class directly.
sklearn.svm.SVC
Parameters:
C: float, optional (default=1.0)
  Penalty parameter C of the error term.
kernel: string, optional (default='rbf')
  Specifies the kernel type to be used in the algorithm. It must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. If none is given, 'rbf' will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples).
degree: int, optional (default=3)
  Degree of the polynomial kernel function ('poly'). Ignored by all other kernels.
gamma: {'scale', 'auto'} or float, optional (default='scale')
  Kernel coefficient for 'rbf', 'poly' and 'sigmoid'. If gamma='scale' (default) is passed then it uses 1 / (n_features * X.var()) as the value of gamma; if 'auto', it uses 1 / n_features. Changed in version 0.22: the default value of gamma changed from 'auto' to 'scale'.
coef0: float, optional (default=0.0)
  Independent term in the kernel function. It is only significant in 'poly' and 'sigmoid'. (It controls how much the model is influenced by high-degree versus low-degree polynomials.)
See the official documentation for the remaining parameters.
A common way to find good hyperparameter values is grid search. It is usually faster to run a coarse grid search first and then a finer grid search around the best values found.
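As a hedged sketch of what such a search could look like (the parameter ranges are chosen arbitrarily for illustration, not taken from the original article):

```python
# Sketch: coarse grid search over gamma and C for an RBF-kernel SVC
# on the same moons data used in the examples below.
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_moons(n_samples=100, noise=0.15, random_state=42)

pipe = Pipeline([("scaler", StandardScaler()), ("svc", SVC(kernel="rbf"))])
param_grid = {
    "svc__gamma": [0.01, 0.1, 1, 10],   # coarse grid; refine around the best values afterwards
    "svc__C": [0.1, 1, 10, 100],
}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```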
1.2.1 Polynomial Kernel
Use SVC(kernel="poly", degree=3) to perform nonlinear SVM classification with a polynomial kernel:
```python
from sklearn.svm import SVC
from sklearn.datasets import make_moons
import matplotlib.pyplot as plt

X, y = make_moons(n_samples=100, noise=0.15, random_state=42)

poly_kernel_svm_clf = Pipeline([
    ("scaler", StandardScaler()),
    ("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
```

```
Pipeline(memory=None,
         steps=[('scaler',
                 StandardScaler(copy=True, with_mean=True, with_std=True)),
                ('svm_clf',
                 SVC(C=5, cache_size=200, class_weight=None, coef0=1,
                     decision_function_shape='ovr', degree=3,
                     gamma='auto_deprecated', kernel='poly', max_iter=-1,
                     probability=False, random_state=None, shrinking=True,
                     tol=0.001, verbose=False))],
         verbose=False)
```

```python
poly100_kernel_svm_clf = Pipeline([
    ("scaler", StandardScaler()),
    ("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
```

```
Pipeline(memory=None,
         steps=[('scaler',
                 StandardScaler(copy=True, with_mean=True, with_std=True)),
                ('svm_clf',
                 SVC(C=5, cache_size=200, class_weight=None, coef0=100,
                     decision_function_shape='ovr', degree=10,
                     gamma='auto_deprecated', kernel='poly', max_iter=-1,
                     probability=False, random_state=None, shrinking=True,
                     tol=0.001, verbose=False))],
         verbose=False)
```

```python
plt.figure(figsize=(11, 4))

plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)

plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)

plt.show()
```

Figure 4: SVM classifiers with a polynomial kernel

1.2.2 Gaussian RBF Kernel
Use SVC(kernel='rbf', gamma=5, C=0.001) to classify nonlinear data:
```python
from sklearn.datasets import make_moons
import matplotlib.pyplot as plt

X, y = make_moons(n_samples=100, noise=0.15, random_state=42)

rbf_kernel_svm_clf = Pipeline([
    ("scaler", StandardScaler()),
    ("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
rbf_kernel_svm_clf.fit(X, y)
```

```
Pipeline(memory=None,
         steps=[('scaler',
                 StandardScaler(copy=True, with_mean=True, with_std=True)),
                ('svm_clf',
                 SVC(C=0.001, cache_size=200, class_weight=None, coef0=0.0,
                     decision_function_shape='ovr', degree=3, gamma=5,
                     kernel='rbf', max_iter=-1, probability=False,
                     random_state=None, shrinking=True, tol=0.001,
                     verbose=False))],
         verbose=False)
```

A simple manual grid search over gamma and C:
```python
from sklearn.svm import SVC

gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)

svm_clfs = []
for gamma, C in hyperparams:
    rbf_kernel_svm_clf = Pipeline([
        ("scaler", StandardScaler()),
        ("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
    ])
    rbf_kernel_svm_clf.fit(X, y)
    svm_clfs.append(rbf_kernel_svm_clf)

plt.figure(figsize=(11, 7))

for i, svm_clf in enumerate(svm_clfs):
    plt.subplot(221 + i)
    plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
    plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
    gamma, C = hyperparams[i]
    plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)

plt.show()
```

Figure 5: SVMs with a Gaussian RBF kernel

2. Support Vector Machine Regression
The SVM algorithm is quite versatile: it supports not only linear and nonlinear classification but also linear and nonlinear regression. The trick is to reverse the objective: instead of trying to fit the largest possible margin between two classes, SVM regression tries to fit as many instances as possible inside the margin while limiting margin violations. The width of the margin is controlled by the hyperparameter $\varepsilon$.
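For reference, here is the epsilon-insensitive loss that underlies SVM regression (a standard formula, added as my own summary rather than taken from the original text): predictions within $\varepsilon$ of the target cost nothing, and errors beyond that grow linearly.

$$ L_\varepsilon(y, \hat{y}) = \max\bigl(0,\; |y - \hat{y}| - \varepsilon\bigr) $$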
2.1 Linear SVM Regression
sklearn.svm.LinearSVR (the training data should be scaled and centered first)
Parameters:
epsilon: float, optional (default=0.0)
  Epsilon parameter in the epsilon-insensitive loss function. Note that the value of this parameter depends on the scale of the target variable y. If unsure, set epsilon=0. (This is the margin width.)
tol: float, optional (default=1e-4)
  Tolerance for stopping criteria.
C: float, optional (default=1.0)
  Penalty parameter C of the error term. The penalty is a squared l2 penalty. The bigger this parameter, the less regularization is used.
loss: string, optional (default='epsilon_insensitive')
  Specifies the loss function. The epsilon-insensitive loss (standard SVR) is the L1 loss, while the squared epsilon-insensitive loss ('squared_epsilon_insensitive') is the L2 loss.
dual: bool, (default=True)
  Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.

```python
from sklearn.svm import LinearSVR

linear_svm_reg = Pipeline([
    ("scaler", StandardScaler()),
    ("svm_reg", LinearSVR(epsilon=1.5))
])
linear_svm_reg.fit(X, y)
```

The figure below shows two linear SVM regression models trained on random linear data, one with a large margin ($\varepsilon = 1.5$) and one with a small margin ($\varepsilon = 0.5$). (The training data should be scaled and centered first.)
Figure 6: SVM regression

Plotting code:
```python
np.random.seed(42)
m = 50
X = 2 * np.random.rand(m, 1)
y = (4 + 3 * X + np.random.randn(m, 1)).ravel()

from sklearn.svm import LinearSVR

svm_reg = LinearSVR(epsilon=1.5, random_state=42)
svm_reg.fit(X, y)
```

```
LinearSVR(C=1.0, dual=True, epsilon=1.5, fit_intercept=True,
          intercept_scaling=1.0, loss='epsilon_insensitive', max_iter=1000,
          random_state=42, tol=0.0001, verbose=0)
```

```python
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42)
svm_reg2 = LinearSVR(epsilon=0.5, random_state=42)
svm_reg1.fit(X, y)
svm_reg2.fit(X, y)

def find_support_vectors(svm_reg, X, y):
    y_pred = svm_reg.predict(X)
    off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon)
    return np.argwhere(off_margin)

svm_reg1.support_ = find_support_vectors(svm_reg1, X, y)
svm_reg2.support_ = find_support_vectors(svm_reg2, X, y)

eps_x1 = 1
eps_y_pred = svm_reg1.predict([[eps_x1]])

def plot_svm_regression(svm_reg, X, y, axes):
    x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
    y_pred = svm_reg.predict(x1s)
    plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
    plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
    plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
    plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
    plt.plot(X, y, "bo")
    plt.xlabel(r"$x_1$", fontsize=18)
    plt.legend(loc="upper left", fontsize=18)
    plt.axis(axes)

plt.figure(figsize=(9, 4))

plt.subplot(121)
plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
#plt.plot([eps_x1, eps_x1], [eps_y_pred, eps_y_pred - svm_reg1.epsilon], "k-", linewidth=2)
plt.annotate('', xy=(eps_x1, eps_y_pred), xycoords='data',
             xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon),
             textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5})
plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20)

plt.subplot(122)
plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18)
plt.show()
```

2.2 Nonlinear SVM Regression
sklearn.svm.SVR
Parameters:
kernel: string, optional (default='rbf')
  Specifies the kernel type to be used in the algorithm. It must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable. If none is given, 'rbf' will be used. If a callable is given it is used to precompute the kernel matrix.
degree: int, optional (default=3)
  Degree of the polynomial kernel function ('poly'). Ignored by all other kernels.
gamma: {'scale', 'auto'} or float, optional (default='scale')
  Kernel coefficient for 'rbf', 'poly' and 'sigmoid'.
coef0: float, optional (default=0.0)
  Independent term in the kernel function. It is only significant in 'poly' and 'sigmoid'.
tol: float, optional (default=1e-3)
  Tolerance for stopping criterion.
C: float, optional (default=1.0)
  Penalty parameter C of the error term.
epsilon: float, optional (default=0.1)
  Epsilon in the epsilon-SVR model. It specifies the epsilon-tube within which no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value.

2.2.1 Polynomial Kernel
```python
from sklearn.svm import SVR

svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg.fit(X, y)
```

The following shows SVM regression with different penalty coefficients (C):
Figure 7: SVM regression with different penalty coefficients

The code is as follows:
```python
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
```

Set different regularization values (C values):
```python
from sklearn.svm import SVR

svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="auto")
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
```

```
SVR(C=0.01, cache_size=200, coef0=0.0, degree=2, epsilon=0.1, gamma='auto',
    kernel='poly', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
```

```python
import matplotlib.pyplot as plt

def plot_svm_regression(svm_reg, X, y, axes):
    x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
    y_pred = svm_reg.predict(x1s)
    plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
    plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
    plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
    plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
    plt.plot(X, y, "bo")
    plt.xlabel(r"$x_1$", fontsize=18)
    plt.legend(loc="upper left", fontsize=18)
    plt.axis(axes)

plt.figure(figsize=(9, 4))

plt.subplot(121)
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)

plt.subplot(122)
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
plt.show()
```

References
[1] Aurélien Géron (trans. 王静源, 贾玮, 边蕤, 邱俊涛). 机器学习实战: 基于 Scikit-Learn 和 TensorFlow [M]. Beijing: China Machine Press, 2018: 136-144.
總結(jié)