[Machine Learning Basics] Math Derivation + Pure Python Implementation of Machine Learning Algorithms 2: Logistic Regression
Since the first installment of this series came out, it has received plenty of feedback and encouragement, and some readers asked for more detailed mathematical derivations. I can only promise to do my best, because typing up the formulas is genuinely time-consuming. In this installment we study the logistic regression model, continuing the same hand-derived-formulas-plus-pure-Python format.
Despite its name, logistic regression has little to do with "logic"; the name is simply a transliteration. A more telling name is log-odds regression: it is an extension of linear regression, and in statistics it is known as a generalized linear model. As everyone knows, linear regression targets tasks whose labels are continuous values. So can a linear model also handle classification tasks? The answer is certainly yes.
The sigmoid function
Whereas the dependent variable y of linear regression is a continuous value, the dependent variable of logistic regression is a binary 0/1 value, so we need a mapping that squashes the original real-valued output into the interval (0, 1). This is where our familiar sigmoid function comes in:
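$$\sigma(x) = \frac{1}{1 + e^{-x}}$$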
Its graph is the familiar S-shaped curve, rising monotonically from 0 to 1 and crossing 1/2 at x = 0.
Besides looking elegant, the sigmoid function has another very useful property: its derivative can be expressed in terms of the function itself, which greatly simplifies the gradient of the cross-entropy loss later on:
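$$\sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr)$$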
邏輯回歸模型的數(shù)學(xué)推導(dǎo)
From the sigmoid function, the basic form of the logistic regression model is:
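$$\hat{y} = \sigma(W^{T}x + b) = \frac{1}{1 + e^{-(W^{T}x + b)}}$$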
Rearranging the equation slightly gives the log odds:
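$$\ln \frac{\hat{y}}{1 - \hat{y}} = W^{T}x + b$$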
Now interpret the output as the class posterior probability p(y = 1 | x); the equation becomes:
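$$\ln \frac{p(y = 1 \mid x)}{p(y = 0 \mid x)} = W^{T}x + b$$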
Solving for the two posteriors then gives:
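$$p(y = 1 \mid x) = \frac{e^{W^{T}x + b}}{1 + e^{W^{T}x + b}}, \qquad p(y = 0 \mid x) = \frac{1}{1 + e^{W^{T}x + b}}$$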
Combining the two cases into a single expression yields the likelihood of one sample:
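$$p(y \mid x) = \hat{y}^{\,y}\,(1 - \hat{y})^{\,1 - y}$$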
Taking the negative log-likelihood, averaged over the m training samples, gives our familiar cross-entropy loss; this is exactly where the cross-entropy loss comes from:
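$$L = -\frac{1}{m} \sum_{i=1}^{m} \Bigl[\, y_i \ln \hat{y}_i + (1 - y_i) \ln (1 - \hat{y}_i) \Bigr]$$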
Minimizing this expression is, in statistical terms, exactly maximum likelihood estimation. Taking the partial derivatives of L with respect to W and b gives:
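$$\frac{\partial L}{\partial W} = \frac{1}{m} X^{T}(\hat{y} - y), \qquad \frac{\partial L}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (\hat{y}_i - y_i)$$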
Updating the weights along these gradients minimizes the loss, which in turn yields the maximum likelihood estimates of the parameters; the two views arrive at the same place. With learning rate $\alpha$, each gradient descent step is:
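$$W \leftarrow W - \alpha \frac{\partial L}{\partial W}, \qquad b \leftarrow b - \alpha \frac{\partial L}{\partial b}$$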
邏輯回歸的 Python 實(shí)現(xiàn)
As with the linear model in the previous installment, we should organize our thoughts before writing any code. A complete logistic regression implementation needs: the sigmoid function, the model body, parameter initialization, gradient-descent training, and prediction on test data with a visualization of the result.
First, define the sigmoid function:
```python
import numpy as np

def sigmoid(x):
    z = 1 / (1 + np.exp(-x))
    return z
```
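As a quick sanity check, the function should return 0.5 at the origin and saturate toward 0 and 1 at the extremes:

```python
print(sigmoid(0))                          # 0.5
print(sigmoid(np.array([-10., 0., 10.])))  # [~4.54e-05, 0.5, ~0.99995]
```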
Next, define the parameter initialization function:

```python
def initialize_params(dims):
    W = np.zeros((dims, 1))
    b = 0
    return W, b
```

Define the main body of the logistic regression model, including the forward computation, the loss function, and the parameter gradients:
```python
def logistic(X, y, W, b):
    num_train = X.shape[0]
    num_feature = X.shape[1]
    # Forward pass: predicted probabilities
    a = sigmoid(np.dot(X, W) + b)
    # Cross-entropy loss
    cost = -1 / num_train * np.sum(y * np.log(a) + (1 - y) * np.log(1 - a))
    # Gradients of the loss with respect to W and b
    dW = np.dot(X.T, (a - y)) / num_train
    db = np.sum(a - y) / num_train
    cost = np.squeeze(cost)
    return a, cost, dW, db
```

Define the gradient-descent training procedure:
```python
def logistic_train(X, y, learning_rate, epochs):
    # Initialize model parameters
    W, b = initialize_params(X.shape[1])
    cost_list = []
    # Iterative training
    for i in range(epochs):
        # Current model output, loss, and parameter gradients
        a, cost, dW, db = logistic(X, y, W, b)
        # Parameter update
        W = W - learning_rate * dW
        b = b - learning_rate * db
        # Record and print the loss every 100 epochs
        if i % 100 == 0:
            cost_list.append(cost)
            print('epoch %d cost %f' % (i, cost))
    # Save parameters and gradients
    params = {'W': W, 'b': b}
    grads = {'dW': dW, 'db': db}
    return cost_list, params, grads
```

Define the prediction function for test data:
```python
def predict(X, params):
    y_prediction = sigmoid(np.dot(X, params['W']) + params['b'])
    # Threshold the probabilities at 0.5
    for i in range(len(y_prediction)):
        if y_prediction[i] > 0.5:
            y_prediction[i] = 1
        else:
            y_prediction[i] = 0
    return y_prediction
```

Use sklearn to generate a simulated binary classification dataset for model training and testing:
```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification

X, labels = make_classification(n_samples=100, n_features=2, n_redundant=0,
                                n_informative=2, random_state=1,
                                n_clusters_per_class=2)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)

unique_labels = set(labels)
colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
for k, col in zip(unique_labels, colors):
    x_k = X[labels == k]
    plt.plot(x_k[:, 0], x_k[:, 1], 'o', markerfacecolor=col,
             markeredgecolor="k", markersize=14)
plt.title('data by make_classification()')
plt.show()
```

The resulting scatter plot shows the distribution of the two generated classes.
Split the data into a simple training set and test set:
```python
offset = int(X.shape[0] * 0.9)
X_train, y_train = X[:offset], labels[:offset]
X_test, y_test = X[offset:], labels[offset:]
y_train = y_train.reshape((-1, 1))
y_test = y_test.reshape((-1, 1))

print('X_train=', X_train.shape)
print('X_test=', X_test.shape)
print('y_train=', y_train.shape)
print('y_test=', y_test.shape)
```

Train on the training set:
```python
cost_list, params, grads = logistic_train(X_train, y_train, 0.01, 1000)
```

The loss printed every 100 epochs traces the training progress.
Predict on the test set:
```python
y_prediction = predict(X_test, params)
print(y_prediction)
```

This prints the predicted 0/1 labels for the test samples.
Define a classification accuracy function to evaluate accuracy on both the training and test sets:
```python
def accuracy(y_test, y_pred):
    correct_count = 0
    # Count position-wise matches between true and predicted labels
    for i in range(len(y_test)):
        if y_test[i] == y_pred[i]:
            correct_count += 1
    accuracy_score = correct_count / len(y_test)
    return accuracy_score

# Print the training accuracy
y_train_pred = predict(X_train, params)
accuracy_score_train = accuracy(y_train, y_train_pred)
print(accuracy_score_train)
```

Check the test-set accuracy:
```python
accuracy_score_test = accuracy(y_test, y_prediction)
print(accuracy_score_test)
```
Since we did not use cross-validation, the test accuracy carries a fair amount of randomness.
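If you want to reduce that randomness, a K-fold check can be sketched as follows. This is only a rough sketch that reuses X, labels, and the functions defined above; the choice of 5 folds and the split via np.array_split are arbitrary:

```python
# Rough 5-fold cross-validation sketch (assumes X, labels, and the
# functions defined above are in scope)
indices = np.random.RandomState(0).permutation(X.shape[0])
folds = np.array_split(indices, 5)
scores = []
for k in range(5):
    test_idx = folds[k]
    train_idx = np.hstack([folds[j] for j in range(5) if j != k])
    # Train on the remaining folds, evaluate on the held-out fold
    _, fold_params, _ = logistic_train(X[train_idx],
                                       labels[train_idx].reshape((-1, 1)),
                                       0.01, 1000)
    y_fold_pred = predict(X[test_idx], fold_params)
    scores.append(accuracy(labels[test_idx].reshape((-1, 1)), y_fold_pred))
print('mean cv accuracy:', np.mean(scores))
```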
Finally, we define a plotting function to visualize the training result together with the learned decision boundary:
```python
def plot_logistic(X_train, y_train, params):
    n = X_train.shape[0]
    xcord1 = []
    ycord1 = []
    xcord2 = []
    ycord2 = []
    # Separate the two classes for plotting
    for i in range(n):
        if y_train[i] == 1:
            xcord1.append(X_train[i][0])
            ycord1.append(X_train[i][1])
        else:
            xcord2.append(X_train[i][0])
            ycord2.append(X_train[i][1])
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.scatter(xcord1, ycord1, s=32, c='red')
    ax.scatter(xcord2, ycord2, s=32, c='green')
    # Decision boundary: W[0]*x1 + W[1]*x2 + b = 0
    x = np.arange(-1.5, 3, 0.1)
    y = (-params['b'] - params['W'][0] * x) / params['W'][1]
    ax.plot(x, y)
    plt.xlabel('X1')
    plt.ylabel('X2')
    plt.show()

plot_logistic(X_train, y_train, params)
```

Wrapping the components into a logistic regression class
Wrap the whole implementation above in a Python class:
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification


class logistic_regression():
    def __init__(self):
        pass

    def sigmoid(self, x):
        z = 1 / (1 + np.exp(-x))
        return z

    def initialize_params(self, dims):
        W = np.zeros((dims, 1))
        b = 0
        return W, b

    def logistic(self, X, y, W, b):
        num_train = X.shape[0]
        num_feature = X.shape[1]
        a = self.sigmoid(np.dot(X, W) + b)
        cost = -1 / num_train * np.sum(y * np.log(a) + (1 - y) * np.log(1 - a))
        dW = np.dot(X.T, (a - y)) / num_train
        db = np.sum(a - y) / num_train
        cost = np.squeeze(cost)
        return a, cost, dW, db

    def logistic_train(self, X, y, learning_rate, epochs):
        W, b = self.initialize_params(X.shape[1])
        cost_list = []
        for i in range(epochs):
            a, cost, dW, db = self.logistic(X, y, W, b)
            W = W - learning_rate * dW
            b = b - learning_rate * db
            if i % 100 == 0:
                cost_list.append(cost)
                print('epoch %d cost %f' % (i, cost))
        params = {'W': W, 'b': b}
        grads = {'dW': dW, 'db': db}
        return cost_list, params, grads

    def predict(self, X, params):
        y_prediction = self.sigmoid(np.dot(X, params['W']) + params['b'])
        for i in range(len(y_prediction)):
            if y_prediction[i] > 0.5:
                y_prediction[i] = 1
            else:
                y_prediction[i] = 0
        return y_prediction

    def accuracy(self, y_test, y_pred):
        correct_count = 0
        for i in range(len(y_test)):
            if y_test[i] == y_pred[i]:
                correct_count += 1
        accuracy_score = correct_count / len(y_test)
        return accuracy_score

    def create_data(self):
        X, labels = make_classification(n_samples=100, n_features=2,
                                        n_redundant=0, n_informative=2,
                                        random_state=1,
                                        n_clusters_per_class=2)
        labels = labels.reshape((-1, 1))
        offset = int(X.shape[0] * 0.9)
        X_train, y_train = X[:offset], labels[:offset]
        X_test, y_test = X[offset:], labels[offset:]
        return X_train, y_train, X_test, y_test

    def plot_logistic(self, X_train, y_train, params):
        n = X_train.shape[0]
        xcord1, ycord1, xcord2, ycord2 = [], [], [], []
        for i in range(n):
            if y_train[i] == 1:
                xcord1.append(X_train[i][0])
                ycord1.append(X_train[i][1])
            else:
                xcord2.append(X_train[i][0])
                ycord2.append(X_train[i][1])
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.scatter(xcord1, ycord1, s=32, c='red')
        ax.scatter(xcord2, ycord2, s=32, c='green')
        x = np.arange(-1.5, 3, 0.1)
        y = (-params['b'] - params['W'][0] * x) / params['W'][1]
        ax.plot(x, y)
        plt.xlabel('X1')
        plt.ylabel('X2')
        plt.show()


if __name__ == "__main__":
    model = logistic_regression()
    X_train, y_train, X_test, y_test = model.create_data()
    print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
    cost_list, params, grads = model.logistic_train(X_train, y_train, 0.01, 1000)
    print(params)
    y_train_pred = model.predict(X_train, params)
    accuracy_score_train = model.accuracy(y_train, y_train_pred)
    print('train accuracy is:', accuracy_score_train)
    y_test_pred = model.predict(X_test, params)
    accuracy_score_test = model.accuracy(y_test, y_test_pred)
    print('test accuracy is:', accuracy_score_test)
    model.plot_logistic(X_train, y_train, params)
```
That wraps up logistic regression; to see how well it classifies in practice, you will have to try it yourself. One more remark: logistic regression is intimately connected to the perceptron, neural networks, and deep learning. As a foundational machine learning model, it is worth firmly mastering both its mathematical derivation and its hand-written implementation, and only then turning to the LogisticRegression module in sklearn; doing so will fill in gaps and broaden your understanding.
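For reference, a minimal sklearn version of the same experiment might look like the sketch below; it assumes the same X_train/X_test split as above and uses sklearn's default solver settings:

```python
from sklearn.linear_model import LogisticRegression

# Fit sklearn's logistic regression on the same split; ravel() flattens
# the (n, 1) label arrays into the 1-D shape sklearn expects
clf = LogisticRegression()
clf.fit(X_train, y_train.ravel())
print('train accuracy:', clf.score(X_train, y_train.ravel()))
print('test accuracy:', clf.score(X_test, y_test.ravel()))
```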
參考資料:
周志華 機(jī)器學(xué)習(xí)
往期精彩回顧適合初學(xué)者入門(mén)人工智能的路線(xiàn)及資料下載機(jī)器學(xué)習(xí)及深度學(xué)習(xí)筆記等資料打印機(jī)器學(xué)習(xí)在線(xiàn)手冊(cè)深度學(xué)習(xí)筆記專(zhuān)輯《統(tǒng)計(jì)學(xué)習(xí)方法》的代碼復(fù)現(xiàn)專(zhuān)輯 AI基礎(chǔ)下載機(jī)器學(xué)習(xí)的數(shù)學(xué)基礎(chǔ)專(zhuān)輯獲取一折本站知識(shí)星球優(yōu)惠券,復(fù)制鏈接直接打開(kāi):https://t.zsxq.com/yFQV7am本站qq群1003271085。加入微信群請(qǐng)掃碼進(jìn)群: 與50位技術(shù)專(zhuān)家面對(duì)面20年技術(shù)見(jiàn)證,附贈(zèng)技術(shù)全景圖總結(jié)
以上是生活随笔為你收集整理的【机器学习基础】数学推导+纯Python实现机器学习算法2:逻辑回归的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問(wèn)題。
- 上一篇: 【Python基础】Python高级特性
- 下一篇: 【机器学习基础】数学推导+纯Python