Let's Talk Logistic Regression
Moving forward from Linear Regression, we will now address the next most important algorithm on our ML journey: Logistic Regression.
‘Logistic’ in the English language means ‘planning’. Now, how do we interpret that in the context of Machine Learning?
Actually, Logistic Regression takes its name from the logistic function used implicitly within the algorithm. (It has nothing to do with the everyday meaning of 'logistic', per se.)
Let’s now break down the aspects of Logistic Regression.
The algorithm is usually used to sort data into one of two possibilities, e.g. pass/fail, alive/dead, this/that (binary). While Linear Regression is a predictive algorithm that outputs continuous values, Logistic Regression is termed a classification algorithm.
How does the algorithm classify, you may ask? To answer that, we will have to probe deeper, into the very function that is the core of the method.
The Logistic function (or the Sigmoid function) is an S-shaped curve that rises quickly and then saturates at a certain level. It takes any real-valued number and maps it to a value between 0 and 1.
It's mathematically given by: 1 / (1 + e^-value), where e is the base of the natural logarithms.
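To make this concrete, here's a minimal Python sketch of the logistic function (the name `sigmoid` and the sample inputs are my own choices for illustration):

```python
import math

def sigmoid(value: float) -> float:
    """Map any real-valued number into the (0, 1) interval."""
    return 1.0 / (1.0 + math.exp(-value))

print(sigmoid(-5.0))  # ~0.0067 -> saturates near 0
print(sigmoid(0.0))   # 0.5     -> the midpoint of the S-curve
print(sigmoid(5.0))   # ~0.9933 -> saturates near 1
```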
Now, how does the logistic curve help? Here you go.
We take the values of x (the training data), plug them into an equation, and model the values of y. Exactly like linear regression, except that y here is either 0 or 1. (We are sorting the data into possibilities, as I mentioned before.)
Here's an example of a Logistic Regression equation:
y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))
This formula is essentially a revised version of the logistic function. We take a perfectly linear function b0 + b1*x and plug it into our logistic function to get a mapped output between 0 and 1. (You can say it's Linear Regression with a twist!)
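As a quick sketch, here's the same equation in Python, reusing the `sigmoid` function defined earlier (the coefficient values are hypothetical, purely for demonstration):

```python
def predict_probability(x: float, b0: float, b1: float) -> float:
    """y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x)), which equals sigmoid(b0 + b1*x)."""
    return sigmoid(b0 + b1 * x)

# Hypothetical coefficients, chosen only for illustration
b0, b1 = -0.4, 0.85
print(predict_probability(2.0, b0, b1))  # a value strictly between 0 and 1
```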
Here b0 and b1 are coefficients that need to be learned from the training data.
Which brings us to two important questions.
我們?nèi)绾未_定y是0還是1? (How do we determine whether our y is 0 or 1?)
The answer is simple: determine a decision boundary.
If our decision boundary is 0.5, values of y computed below 0.5 are classified as 0, and values above 0.5 are classified as 1.
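In code, the decision rule is a one-line threshold on the predicted probability. Here's a sketch building on `predict_probability` from above (the 0.5 boundary matches the example in the text):

```python
def classify(x: float, b0: float, b1: float, boundary: float = 0.5) -> int:
    """Return 1 if the predicted probability reaches the decision boundary, else 0."""
    return 1 if predict_probability(x, b0, b1) >= boundary else 0
```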
如何學習系數(shù)? (How are the coefficients learned?)
While there are several methods for this, as covered in 'Let's talk Linear Regression', an important one is Stochastic Gradient Descent.
Given a training example:
1. We initialise our coefficients to 0 and calculate our prediction.
2. We calculate new coefficient values based on the error in that prediction.
3. We repeat until the error drops to a desirable level. (We want our model to be accurate, after all.)
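Below is a minimal sketch of that loop in plain Python. It uses the standard gradient update for logistic regression under log-loss (coefficient += learning_rate * (y - prediction) * x); the learning rate, epoch count, and toy dataset are placeholder choices, and for simplicity it runs a fixed number of epochs rather than checking an error threshold:

```python
def train_sgd(data, learning_rate=0.3, epochs=100):
    """Learn b0 and b1 by stochastic gradient descent.

    `data` is a list of (x, y) pairs where y is 0 or 1.
    """
    b0, b1 = 0.0, 0.0  # step 1: initialise the coefficients to 0
    for _ in range(epochs):
        for x, y in data:
            y_hat = sigmoid(b0 + b1 * x)  # current prediction
            error = y - y_hat             # step 2: error in the prediction
            b0 += learning_rate * error       # update the intercept
            b1 += learning_rate * error * x   # update the slope
    return b0, b1

# Toy, linearly separable data (made up): small x -> class 0, large x -> class 1
data = [(1.0, 0), (1.5, 0), (2.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]
b0, b1 = train_sgd(data)
print(classify(1.2, b0, b1), classify(7.5, b0, b1))  # expect: 0 1
```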
I will be skimming over the rigorous math in this article to keep it beginner-friendly.
This article is by no means an exhaustive study of Logistic Regression and is an attempt at simplifying and breaking down the concepts.
For a deeper mathematical understanding, I would highly recommend checking out the Scikit Learn documentation and the Python implementation of the algorithm.
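For reference, here's a quick sketch of the same task done with scikit-learn (the toy data is again made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [1.5], [2.0], [6.0], [7.0], [8.0]])  # one feature per sample
y = np.array([0, 0, 0, 1, 1, 1])                          # binary labels

model = LogisticRegression()
model.fit(X, y)

print(model.predict([[1.2], [7.5]]))  # predicted classes, e.g. [0 1]
print(model.predict_proba([[7.5]]))   # probabilities for class 0 and class 1
```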
The equation examples and images in this article are referenced from: https://machinelearningmastery.com/logistic-regression-for-machine-learning/
Original article: https://medium.com/swlh/lets-talk-logistic-regression-4b2072ad7b4e