How to Build a Regression Model in Python
DATA SCIENCE
If you are an aspiring data scientist or a veteran data scientist, this article is for you! In this article, we will be building a simple regression model in Python. To spice things up a bit, we will not be using the widely popular and ubiquitous Boston Housing dataset but instead, we will be using a simple Bioinformatics dataset. Particularly, we will be using the Delaney Solubility dataset that represents an important physicochemical property in computational drug discovery.
The aspiring data scientist will find the step-by-step tutorial particularly accessible while the veteran data scientist may want to find a new challenging dataset for which to try out their state-of-the-art machine learning algorithm or workflow.
1. What are we Building Today?
A regression model! And we are going to use Python to do that. While we’re at it, we are going to use a bioinformatics dataset (technically, it’s a cheminformatics dataset) for the model building.
Particularly, we are going to predict the LogS value which is the aqueous solubility of small molecules. The aqueous solubility value is a relative measure of the ability of a molecule to be soluble in water. It is an important physicochemical property of effective drugs.
What better way to get acquainted with the concept of what we are building today than a cartoon illustration!
Cartoon illustration of the schematic workflow of machine learning model building on the cheminformatics dataset, where the target response variable is predicted as a function of input molecular features. Technically, this procedure is known as quantitative structure-activity relationship (QSAR). (Drawn by Chanin Nantasenamat)

2. Delaney Solubility Dataset
2.1. Data Understanding
As the name implies, the Delaney solubility dataset comprises the aqueous solubility values along with the corresponding chemical structures for a set of 1,144 molecules. For those outside the field of biology, there are a few terms that we will spend some time clarifying.
Molecules, sometimes referred to as small molecules or compounds, are chemical entities that are made up of atoms. Let’s use an analogy here and think of atoms as being equivalent to Lego blocks, where 1 atom is 1 Lego block. When we use several Lego blocks to build something, whether it be a house, a car or some abstract entity, such constructed entities are comparable to molecules. Thus, we can refer to the specific arrangement and connectivity of atoms to form a molecule as the chemical structure.
Analogy of the construction of molecules to Lego blocks. This yellow house is from Lego 10703 Creative Builder Box. (Drawn by Chanin Nantasenamat)

So how does each of the entities that you are building differ? Well, they differ by the spatial connectivity of the blocks (i.e. how the individual blocks are connected). In chemical terms, each molecule differs by its chemical structure. Thus, if you alter the connectivity of the blocks, you would consequently have effectively altered the entity that you are building. For molecules, if atom types (e.g. carbon, oxygen, nitrogen, sulfur, phosphorus, fluorine, chlorine, etc.) or groups of atoms (e.g. hydroxy, methoxy, carboxy, ether, etc.) are altered, then the molecule would also be altered, consequently becoming a new chemical entity (i.e. a new molecule is produced).
Cartoon illustration of a molecular model. Red, blue, dark gray and white represent oxygen, nitrogen, carbon and hydrogen atoms, while the light gray connecting the atoms are the bonds. Each atom is comparable to a Lego block. The constructed molecule shown above is comparable to a constructed Lego entity (such as the yellow house shown above in this article). (Drawn by Chanin Nantasenamat)

To become an effective drug, molecules will need to be taken up and distributed in the human body, and such a property is directly governed by the aqueous solubility. Solubility is an important property that researchers take into consideration in the design and development of therapeutic drugs. Thus, a potent drug that is unable to reach the desired target owing to its poor solubility would be a poor drug candidate.
2.2. Retrieving the Dataset
The aqueous solubility dataset produced by Delaney in the research paper entitled ESOL: Estimating Aqueous Solubility Directly from Molecular Structure is available as a Supplementary file. For your convenience, we have also downloaded the entire Delaney solubility dataset and made it available on the Data Professor GitHub.
Preview of the raw version of the Delaney solubility dataset. The full version is available on the Data Professor GitHub.

CODE PRACTICE
Let’s get started, shall we?
Fire up Google Colab or your Jupyter Notebook and run the following code cells.
CODE EXPLANATION
Let’s now go over what each code cell means.
The first code cell,
As the code literally says, we are going to import the pandas library as pd.
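The embedded code cells from the original article were lost in the page extraction; the first cell is presumably just the standard import:

```python
# Import the pandas library under its conventional alias
import pandas as pd
```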
The second code cell:
Assigns the URL where the Delaney solubility dataset resides to the delaney_url variable.
Reads in the Delaney solubility dataset via the pd.read_csv() function and assigns the resulting dataframe to the delaney_df variable.
Calls the delaney_df variable to return the output value that essentially prints out a dataframe containing the following 4 columns:
Compound ID — Names of the compounds.
measured log(solubility:mol/L) — The experimental aqueous solubility values as reported in the original research article by Delaney.
ESOL predicted log(solubility:mol/L) — Predicted aqueous solubility values as reported in the original research article by Delaney.
SMILES — A 1-dimensional encoding of the chemical structure information.
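A sketch of this second cell. The exact raw-file path on the Data Professor GitHub is an assumption here, so a two-row inline sample (values illustrative only) stands in for the real 1,144-row CSV and lets the snippet run without a network connection:

```python
from io import StringIO
import pandas as pd

# In the article, delaney_url points at the raw CSV on the Data Professor
# GitHub; the exact path is an assumption:
# delaney_url = "https://raw.githubusercontent.com/dataprofessor/data/master/delaney.csv"
# delaney_df = pd.read_csv(delaney_url)

# Offline stand-in with the same 4 columns described above:
sample_csv = StringIO(
    "Compound ID,measured log(solubility:mol/L),"
    "ESOL predicted log(solubility:mol/L),SMILES\n"
    "Benzene,-1.64,-2.06,c1ccccc1\n"
    "Ethanol,1.10,0.71,CCO\n"
)
delaney_df = pd.read_csv(sample_csv)
print(delaney_df)
```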
2.3. Calculating the Molecular Descriptors
A point to note is that the above dataset, as originally provided by the authors, is not yet usable out of the box. Particularly, we will have to use the SMILES notation to calculate the molecular descriptors via the rdkit Python library, as demonstrated step by step in a previous Medium article (How to Use Machine Learning for Drug Discovery).
It should be noted that the SMILES notation is a one-dimensional depiction of the chemical structure information of the molecules. Molecular descriptors are quantitative or qualitative descriptions of the unique physicochemical properties of molecules.
Let’s think of molecular descriptors as a way to uniquely represent molecules in numerical form that machine learning algorithms can understand in order to learn from them, make predictions and provide useful knowledge on the structure-activity relationship. As previously noted, the specific arrangement and connectivity of atoms produce different chemical structures that consequently dictate the resulting activity they will produce. Such a notion is known as the structure-activity relationship.
The processed version of the dataset, containing the calculated molecular descriptors along with the corresponding response variable (logS), is shown below. This processed dataset is now ready to be used for machine learning model building, whereby the first 4 variables can be used as the X variables and the logS variable can be used as the Y variable.
Preview of the processed version of the Delaney solubility dataset. Essentially, the SMILES notation from the raw version was used as input to compute the 4 molecular descriptors, as described in detail in a previous Medium article and YouTube video. The full version is available on the Data Professor GitHub.

A quick description of the 4 molecular descriptors and the response variable is provided below:
cLogP — Octanol-water partition coefficient
MW — Molecular weight
RB — Number of rotatable bonds
AP — Aromatic proportion = number of aromatic atoms / total number of heavy atoms
LogS — Log of the aqueous solubility
CODE PRACTICE

Let’s continue by reading in the CSV file that contains the calculated molecular descriptors.
CODE EXPLANATION
Let’s now go over what the code cells mean.
Assigns the URL where the Delaney solubility dataset (with calculated descriptors) resides to the delaney_url variable.
Reads in the Delaney solubility dataset (with calculated descriptors) via the pd.read_csv() function and assigns the resulting dataframe to the delaney_descriptors_df variable.
Calls the delaney_descriptors_df variable to return the output value that essentially prints out a dataframe containing the following 5 columns:
The first 4 columns are molecular descriptors computed using the rdkit Python library. The fifth column is the response variable logS.
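A sketch of these cells. Again, the GitHub file path and the exact descriptor column names are assumptions (the values are illustrative), so an inline sample stands in for the processed CSV:

```python
from io import StringIO
import pandas as pd

# Stand-in for the processed CSV (with rdkit-calculated descriptors) on
# the Data Professor GitHub; column names and values are assumed:
sample_csv = StringIO(
    "MolLogP,MolWt,NumRotatableBonds,AromaticProportion,logS\n"
    "2.59,167.85,0.0,0.0,-2.18\n"
    "1.69,133.40,1.0,0.0,-2.00\n"
)
delaney_descriptors_df = pd.read_csv(sample_csv)
print(delaney_descriptors_df)
```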
3. Data Preparation
3.1. Separating the data as X and Y variables
In building a machine learning model using the scikit-learn library, we would need to separate the dataset into the input features (the X variables) and the target response variable (the Y variable).
CODE PRACTICE
Follow along and implement the following 2 code cells to separate the dataset contained in the delaney_descriptors_df dataframe into X and Y subsets.
CODE EXPLANATION
Let’s take a look at the 2 code cells.
First code cell:
Here we are using the drop() function to specifically ‘drop’ the logS variable (which is the Y variable; we will be dealing with it in the next code cell). As a result, we will have the 4 remaining variables, which are assigned to the X dataframe. Particularly, we apply the drop() function to the delaney_descriptors_df dataframe as in delaney_descriptors_df.drop(‘logS’, axis=1), where the first input argument is the specific column that we want to drop and the second input argument, axis=1, specifies that the first input argument is a column.
Second code cell:
Here we select a single column (the ‘logS’ column) from the delaney_descriptors_df dataframe via delaney_descriptors_df.logS and assign it to the Y variable.
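The two cells above can be sketched as follows, with a small stand-in dataframe (column names assumed) so the snippet is self-contained:

```python
import pandas as pd

# Small stand-in for delaney_descriptors_df (column names assumed)
delaney_descriptors_df = pd.DataFrame({
    "MolLogP": [2.59, 1.69],
    "MolWt": [167.85, 133.40],
    "NumRotatableBonds": [0.0, 1.0],
    "AromaticProportion": [0.0, 0.0],
    "logS": [-2.18, -2.00],
})

# First cell: drop the logS column, keeping the 4 descriptors as X
X = delaney_descriptors_df.drop("logS", axis=1)

# Second cell: select the logS column as Y
Y = delaney_descriptors_df.logS
```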
3.2. Data splitting
In evaluating the model performance, the standard practice is to split the dataset into 2 (or more) partitions, and here we will be using the 80/20 split ratio, whereby the 80% subset will be used as the train set and the 20% subset as the test set. As scikit-learn requires that the data be further separated into their X and Y components, the train_test_split() function can readily perform the above-mentioned task.
CODE PRACTICE
Let’s implement the following 2 code cells.
CODE EXPLANATION
Let’s take a look at what the code is doing.
First code cell:
Here we will be importing train_test_split from the scikit-learn library.
Second code cell:
We start by defining the names of the 4 variables that the train_test_split() function will generate, and this includes X_train, X_test, Y_train and Y_test. The first 2 correspond to the X dataframes for the train and test sets, while the last 2 correspond to the Y variables for the train and test sets.
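A runnable sketch of the two cells, with a synthetic stand-in for the 1,144-molecule descriptor matrix (the random_state value is an assumption):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 1,144 x 4 descriptor matrix and LogS vector
rng = np.random.default_rng(42)
X = rng.normal(size=(1144, 4))
Y = rng.normal(size=1144)

# 80/20 split: test_size=0.2 reserves 20% of the rows for the test set
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=42)
```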
4. Linear Regression Model
Now, comes the fun part and let’s build a regression model.
4.1. Training a linear regression model
CODE PRACTICE
Here, we will be using the LinearRegression() function from scikit-learn to build a model using the ordinary least squares linear regression.
CODE EXPLANATION
Let’s see what the code is doing.
First code cell:
Here we import the linear_model from the scikit-learn library.
Second code cell:
We assign the linear_model.LinearRegression() function to the model variable.
A model is built using the command model.fit(X_train, Y_train), whereby the model.fit() function takes X_train and Y_train as input arguments to build or train a model. Particularly, X_train contains the input features while Y_train contains the response variable (logS).
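The two cells can be sketched as below. Synthetic data stands in for the Delaney training set; Y is an exact linear function of X so ordinary least squares can recover the coefficients exactly:

```python
import numpy as np
from sklearn import linear_model

# Synthetic stand-in for X_train / Y_train
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
true_coef = np.array([1.0, -2.0, 0.5, 3.0])
Y_train = X_train @ true_coef + 0.7  # exact linear relationship

# First cell equivalent: from sklearn import linear_model (done above)
# Second cell: instantiate and fit an ordinary least squares model
model = linear_model.LinearRegression()
model.fit(X_train, Y_train)
```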
4.2. Applying the trained model to predict LogS for the training and test sets
As mentioned above, model.fit() trains the model and the resulting trained model is saved into the model variable.
CODE PRACTICE
We will now apply the trained model to make predictions on the training set (X_train).
We will now apply the trained model to make predictions on the test set (X_test).
CODE EXPLANATION
Let’s proceed to the explanation.
The following explanation will cover only the training set (X_train) as the exact same concept can be identically applied to the test set (X_test) by performing the following simple tweaks:
Replace X_train by X_test
Replace Y_train by Y_test
Replace Y_pred_train by Y_pred_test
Everything else is exactly the same.
First code cell:
Predictions of the logS values are performed by calling model.predict() with X_train as the input argument, such that we run the command model.predict(X_train). The resulting predicted values are assigned to the Y_pred_train variable.
Second code cell:
Model performance metrics are now printed.
Regression coefficient values are obtained from model.coef_,
The y-intercept value is obtained from model.intercept_,
The mean squared error (MSE) is computed using the mean_squared_error() function with Y_train and Y_pred_train as input arguments, such that we run mean_squared_error(Y_train, Y_pred_train).
The coefficient of determination (also known as R2) is computed using the r2_score() function with Y_train and Y_pred_train as input arguments, such that we run r2_score(Y_train, Y_pred_train).
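The two cells above, applied to the training set, can be sketched as follows (synthetic data stands in for the Delaney descriptors and LogS values):

```python
import numpy as np
from sklearn import linear_model
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for the training data
rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 4))
Y_train = X_train @ np.array([1.0, -2.0, 0.5, 3.0]) \
          + rng.normal(scale=0.1, size=100)

model = linear_model.LinearRegression().fit(X_train, Y_train)

# First cell: apply the trained model to the training set
Y_pred_train = model.predict(X_train)

# Second cell: print the model performance metrics
print("Coefficients:", model.coef_)
print("Intercept:", model.intercept_)
print("Mean squared error (MSE): %.2f"
      % mean_squared_error(Y_train, Y_pred_train))
print("Coefficient of determination (R2): %.2f"
      % r2_score(Y_train, Y_pred_train))
```

Replacing X_train, Y_train and Y_pred_train with their test-set counterparts gives the test-set metrics, exactly as described above.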
4.3. Printing out the Regression Equation
The equation of a linear regression model is actually the model itself whereby you can plug in the input feature values and the equation will return the target response values (LogS).
CODE PRACTICE
Let’s now print out the regression model equation.
CODE EXPLANATION
First code cell:
All the components of the regression model equation are derived from the model variable. The y-intercept and the regression coefficients for LogP, MW, RB and AP are provided in model.intercept_, model.coef_[0], model.coef_[1], model.coef_[2] and model.coef_[3], respectively.
Second code cell:
Here we put together the components and print out the equation via the print() function.
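A sketch of the two cells. The fit here uses synthetic data, so the printed coefficients are illustrative rather than the actual Delaney model; only the assembly pattern matters:

```python
import numpy as np
from sklearn import linear_model

# Fit on synthetic data; X columns ordered as LogP, MW, RB, AP
rng = np.random.default_rng(2)
X_train = rng.normal(size=(50, 4))
Y_train = X_train @ np.array([-0.70, -0.0060, 0.0030, -0.40]) + 0.26

model = linear_model.LinearRegression().fit(X_train, Y_train)

# First cell: pull the intercept and per-descriptor coefficients
intercept = model.intercept_
logp, mw, rb, ap = model.coef_

# Second cell: assemble the components and print the equation
equation = "LogS = %.2f %+.4f LogP %+.4f MW %+.4f RB %+.4f AP" % (
    intercept, logp, mw, rb, ap)
print(equation)
```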
5. Scatter Plot of experimental vs. predicted LogS
We will now visualize the relative distribution of the experimental versus predicted LogS by means of a scatter plot. Such a plot will allow us to quickly gauge the model performance.
CODE PRACTICE
In the forthcoming examples, I will show you how to lay out the 2 sub-plots differently, namely as: (1) a vertical plot and (2) a horizontal plot.
CODE EXPLANATION
Let’s now take a look at the underlying code for implementing the vertical and horizontal plots. Here, I provide 2 options for you to choose from: laying out this multi-plot figure either vertically or horizontally.
Import libraries
Both start by importing the necessary libraries, namely matplotlib and numpy. Particularly, most of the code uses matplotlib for creating the plot, while the numpy library is used here to add a trend line.
Define figure size
Next, we specify the figure dimensions (what will be the width and height of the figure) via plt.figure(figsize=(5,11)) for the vertical plot and plt.figure(figsize=(11,5)) for the horizontal plot. Particularly, (5,11) tells matplotlib that the figure for the vertical plot should be 5 inches wide and 11 inches tall while the inverse is used for the horizontal plot.
Define placeholders for the sub-plots
We will tell matplotlib that we want to have 2 rows and 1 column, and thus its layout will be that of a vertical plot. This is specified by plt.subplot(2, 1, 1), where the input arguments 2, 1, 1 refer to 2 rows, 1 column and the particular sub-plot that we are creating underneath it. In other words, think of the plt.subplot() function as a way of structuring the plot by creating placeholders for the various sub-plots that the figure contains. The second sub-plot of the vertical plot is specified by the value of 2 in the third input argument of the plt.subplot() function, as in plt.subplot(2, 1, 2).
By applying the same concept, the structure of the horizontal plot is created with 1 row and 2 columns via plt.subplot(1, 2, 1) and plt.subplot(1, 2, 2), which house the 2 sub-plots.
Creating the scatter plot
Now that the general structure of the figure is in place, let’s add the data visualizations. The data scatters are added using the plt.scatter() function, as in plt.scatter(x=Y_train, y=Y_pred_train, c="#7CAE00", alpha=0.3), where x refers to the data column to use for the x axis, y refers to the data column to use for the y axis, c refers to the color of the scattered data points and alpha refers to the alpha transparency level (how translucent the scattered data points should be; the lower the number, the more transparent they become).
Adding the trend line
Next, we use the np.polyfit() and np.poly1d() functions from numpy together with the plt.plot() function from matplotlib to create the trend line.
# Add trendline
# https://stackoverflow.com/questions/26447191/how-to-add-trendline-in-python-matplotlib-dot-scatter-graphs
z = np.polyfit(Y_train, Y_pred_train, 1)
p = np.poly1d(z)
plt.plot(Y_train, p(Y_train), "#F8766D")
Adding the x and y axes labels
To add labels for the x and y axes, we use the plt.xlabel() and plt.ylabel() functions. It should be noted that for the vertical plot, we omit the x axis label for the top sub-plot (why? because it is redundant with the x-axis label of the bottom sub-plot).
Saving the figure
Finally, we are going to save the constructed figure to a file, which we can do using the plt.savefig() function from matplotlib, specifying the file name as the input argument. Lastly, finish off with plt.show().
plt.savefig('plot_vertical_logS.png')
plt.savefig('plot_vertical_logS.pdf')
plt.show()
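Putting the pieces together, here is a runnable sketch of the vertical (2-row) version. Synthetic LogS values stand in for the trained model’s output, and the test-set color "#619CFF" is an assumption (the article only shows "#7CAE00" for the training set):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is required
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-ins for the experimental and predicted LogS values
rng = np.random.default_rng(3)
Y_train = rng.normal(-3, 2, size=915)
Y_pred_train = Y_train + rng.normal(scale=0.5, size=915)
Y_test = rng.normal(-3, 2, size=229)
Y_pred_test = Y_test + rng.normal(scale=0.6, size=229)

plt.figure(figsize=(5, 11))  # 5 in wide, 11 in tall: vertical layout

# Top sub-plot: training set
plt.subplot(2, 1, 1)
plt.scatter(x=Y_train, y=Y_pred_train, c="#7CAE00", alpha=0.3)
z = np.polyfit(Y_train, Y_pred_train, 1)  # slope/intercept of trend line
p = np.poly1d(z)
plt.plot(Y_train, p(Y_train), "#F8766D")
plt.ylabel("Predicted LogS")  # x label omitted (redundant with bottom)

# Bottom sub-plot: test set
plt.subplot(2, 1, 2)
plt.scatter(x=Y_test, y=Y_pred_test, c="#619CFF", alpha=0.3)
z = np.polyfit(Y_test, Y_pred_test, 1)
p = np.poly1d(z)
plt.plot(Y_test, p(Y_test), "#F8766D")
plt.xlabel("Experimental LogS")
plt.ylabel("Predicted LogS")

plt.savefig("plot_vertical_logS.png")
plt.savefig("plot_vertical_logS.pdf")
plt.show()
```

For the horizontal layout, swap figsize to (11, 5) and use plt.subplot(1, 2, 1) and plt.subplot(1, 2, 2) instead.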
VISUAL EXPLANATION
The above section provided a text-based explanation; in this section we are going to do the same with a visual explanation that uses color highlights to distinguish the different components of the plot.
Visual explanation of creating a scatter plot. Here we color-highlight the specific lines of code and their corresponding plot components. (Drawn by Chanin Nantasenamat)

Need Your Feedback
As an educator, I love to hear how I can improve my content. Please let me know in the comments!
About Me
I work full-time as an Associate Professor of Bioinformatics and Head of Data Mining and Biomedical Informatics at a Research University in Thailand. In my after work hours, I’m a YouTuber (AKA the Data Professor) making online videos about data science. In all tutorial videos that I make, I also share Jupyter notebooks on GitHub (Data Professor GitHub page).
Connect with Me on Social Network
? YouTube: http://youtube.com/dataprofessor/? Website: http://dataprofessor.org/ (Under construction)? LinkedIn: https://www.linkedin.com/company/dataprofessor/? Twitter: https://twitter.com/thedataprof? FaceBook: http://facebook.com/dataprofessor/? GitHub: https://github.com/dataprofessor/? Instagram: https://www.instagram.com/data.professor/
Source: https://towardsdatascience.com/how-to-build-a-regression-model-in-python-9a10685c7f09