How to use machine learning for an optical/photonics application in 40 lines of code
In the last couple of years, artificial intelligence has been finding its way into all sorts of applications: medical, health and fitness, education, video calling, sports… you name it.
Perhaps you are wondering whether you can use artificial intelligence techniques in your own research area, but don't have much idea how to go about it. Then I say: yes, there is a good chance that you can, and in this article I am going to explain how to apply already developed artificial intelligence techniques to the application of your choice within 40 lines of code.
One more thing before diving into the code: do you need to be an expert in coding to understand this? No, you don't. Even a basic knowledge of coding, or a little bit of it picked up at school or at the college level, is sufficient to get started.
I will show you how to apply artificial intelligence, or more specifically machine learning, to an optical/photonics application problem. I have chosen an optical application because my background is in optical engineering. Yours can be different: it can be in chemistry, physics, material science, biology, or any other field. The steps I am going to explain are transferable to all of these research areas. Also, I will be coding in Python, as this is the most commonly used language for machine learning applications.
There are various categories of machine learning problems: classification, regression, and clustering, among others. For more details, refer to this link. In this article, I am going to show you example code for a regression problem.
The first step for a machine learning application/problem is to have or generate a good, clean dataset. There is a good chance that the application you have in mind does not have a dataset freely available online. So, first I briefly describe the photonics problem I am considering and generate the dataset for it.
Problem considered
[Image: Photo by Author]
The left circular structure shows what the cross-section of a typical hexagonal Photonic Crystal Fiber (PCF) looks like in an optical/photonics problem. Next, I need to decide on the input parameters for this problem. I have shown five input parameters (in green), but for this article I am only using three of them (wavelength, diameter, pitch) to keep the problem small and simple. For various combinations of the input parameters, I obtain the desired output node quantities (in orange) and store them in a pcf_data.xlsx file, as shown below.
[Image: Photo by Author]
Again, I have only considered the effective index as output, to keep the problem simple. If you want, you can have more output nodes, as shown in the output-layer nodes figure. Also, I have only taken 20 combinations of the input parameters. That is far too few for a typical machine learning problem, but it is sufficient for demonstrating the method in this article.
For your case, you first need to figure out the problem and its input and output parameters. Then generate the dataset in CSV/XLSX format, similar to the one shown above. Some sort of simulator/software, or even an experimental/fabrication setup, is fine for generating the data, as sketched below.
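If your simulator or solver can be called from Python, a minimal sketch of generating such a file might look like the following. Everything here is illustrative: the sweep ranges, the column names, and the compute_effective_index() helper are placeholders for whatever your own tool or experiment provides.

import itertools
import pandas as pd

def compute_effective_index(wavelength, dia_by_pitch, pitch):
    # Placeholder only: substitute a call to your own mode solver,
    # simulator, or measurement routine here.
    return 1.45 - 0.01 * wavelength * dia_by_pitch / pitch

# Hypothetical sweep ranges; use whatever your problem requires.
wavelengths = [1.0, 1.31, 1.55]    # e.g. in micrometres
dia_by_pitches = [0.6, 0.7, 0.8]   # hole diameter / pitch ratio
pitches = [1.8, 2.0, 2.3]          # e.g. in micrometres

rows = []
for wl, d, p in itertools.product(wavelengths, dia_by_pitches, pitches):
    rows.append({'diaBYpitch': d, 'pitch': p, 'wavelength': wl,
                 'eff_index': compute_effective_index(wl, d, p)})

# Three input columns followed by one output column, as in the article.
pd.DataFrame(rows).to_excel('pcf_data.xlsx', sheet_name='Sheet1', index=False)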
對于您的情況,您首先需要弄清楚問題及其輸入和輸出參數。 然后以類似于上圖所示的CSV / XLSX格式生成數據集。 某種模擬器/軟件甚至實驗/制造套件都可以生成數據。
The second step is to normalize the generated/collected data. The goal of normalization is to bring the values of the numeric columns in the dataset onto a common scale. Here, we use Scikit-learn's MinMaxScaler() to transform each feature individually so that it lies in a given range on the training set, e.g. between zero and one. Note: you may need to install Scikit-learn first, using pip install scikit-learn.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Read the data stored in the Excel file using the pandas library
df = pd.read_excel('pcf_data.xlsx', sheet_name='Sheet1')

# Scale the input data into the range (0, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(df)
df_scaler = scaler.transform(df)

[Image: Photo by Author]
All the input and output values are now scaled between zero and one, with the minimum and maximum value in every column mapped to 0 and 1, respectively. These scaled values become the inputs to the machine learning model. At the end, we will perform the inverse transform to recover the original values.
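As a quick optional sanity check (not counted in the 40 lines), you can verify that the inverse transform really does recover the original values:

import numpy as np

# inverse_transform should reproduce the original, unscaled data
print(np.allclose(scaler.inverse_transform(df_scaler), df.values))  # expected: True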
The third step is to define the input and output parameter columns, so the code knows which is which, and to split the whole dataset into training and test sets. In our case, the first 3 columns are inputs and the last column is the output. The train_test_split() function is used to split the dataset. The extracted test dataset will be used to check the accuracy of the model. Here, 10% of the data is held out as the test dataset.
from sklearn.model_selection import train_test_split

num_inputs = 3
num_outputs = 1

X = df_scaler[:, range(0, num_inputs)]
y = df_scaler[:, range(num_inputs, num_inputs + num_outputs)]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
The fourth step is to define the machine learning model. Here, I use the MLPRegressor() class from Scikit-learn to quickly define the various layers and parameters of the machine learning model. The data is shuffled so that the model is not biased towards any particular inputs. For more details about the various parameters of MLPRegressor(), check the official website link. The .fit() function trains the model for up to the specified number of epochs/iterations.
from sklearn.neural_network import MLPRegressor

epochs = 1000

mlp = MLPRegressor(shuffle=True, random_state=1, max_iter=epochs)
mlp.fit(X_train, y_train)

print("Training set score: ", mlp.score(X_train, y_train))
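By default, MLPRegressor() uses a single hidden layer of 100 neurons. If you want to experiment with the architecture, a variant might look like the sketch below; the layer sizes and solver are purely illustrative, not the settings used for the results in this article.

# Illustrative variant only: two hidden layers of 16 neurons each and the
# 'lbfgs' solver, which often works well on very small datasets.
mlp_custom = MLPRegressor(hidden_layer_sizes=(16, 16), solver='lbfgs',
                          random_state=1, max_iter=epochs)
mlp_custom.fit(X_train, y_train.ravel())  # ravel() gives the 1-D target sklearn expects
print("Custom training score: ", mlp_custom.score(X_train, y_train.ravel()))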
The fifth step is to check the predictions on the test set using the model trained in the previous step. Here, inverse_transform() is needed at the end to obtain the unscaled values. Since the test set is generated randomly during train_test_split(), you need to carefully check and compare the results obtained from the function below with the actual values stored in the test set.
import numpy as np

def prediction(data):
    # data should already be scaled
    pred_output = mlp.predict(data)
    final = np.concatenate((data, pred_output.reshape(-1, 1)), axis=1)
    return scaler.inverse_transform(final)

print(prediction(X_test))
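To make that comparison easier, the actual (unscaled) test samples can be recovered in the same way, so you can place them side by side with the predictions printed above:

# True test rows in original units; the last column is the actual
# effective index, to be compared with the predicted one above.
actual = scaler.inverse_transform(np.concatenate((X_test, y_test), axis=1))
print(actual)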
Finally, I show how to obtain the output for inputs given by the user, rather than for the test set. Let us say the user wants to predict the output for these inputs: [diaBYpitch, pitch, wavelength] → [0.7, 0.8, 1.8]. Our machine learning model expects scaled inputs (defined above), and the scaler works on 4 columns in total for this problem. So here I append a zero to the user inputs and then do the scaling with scaler.transform(), as we did above. You could append any number instead of zero and it would not affect the output, since we are not retraining the model; the fourth column is only there so that the scaling can be performed with the scaler defined above.
def predict_on_user_input(user_input):
    user_input = np.append(user_input, 0).reshape(1, -1)
    user_input = scaler.transform(user_input)
    return prediction(user_input[:, 0:num_inputs])

output = predict_on_user_input([0.7, 0.8, 1.8])
print('output: ', output)
All the code together
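Assembled from the snippets in the steps above, and assuming the same pcf_data.xlsx file, the complete script looks roughly like this:

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Steps 1-2: read the dataset and scale every column into the range (0, 1)
df = pd.read_excel('pcf_data.xlsx', sheet_name='Sheet1')
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(df)
df_scaler = scaler.transform(df)

# Step 3: split into input/output columns and training/test sets
num_inputs = 3
num_outputs = 1
X = df_scaler[:, range(0, num_inputs)]
y = df_scaler[:, range(num_inputs, num_inputs + num_outputs)]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)

# Step 4: define and train the model
epochs = 1000
mlp = MLPRegressor(shuffle=True, random_state=1, max_iter=epochs)
mlp.fit(X_train, y_train)
print("Training set score: ", mlp.score(X_train, y_train))

# Step 5: predict on already-scaled data and undo the scaling
def prediction(data):
    pred_output = mlp.predict(data)
    final = np.concatenate((data, pred_output.reshape(-1, 1)), axis=1)
    return scaler.inverse_transform(final)

print(prediction(X_test))

# Prediction for user-supplied (unscaled) inputs: [diaBYpitch, pitch, wavelength]
def predict_on_user_input(user_input):
    user_input = np.append(user_input, 0).reshape(1, -1)
    user_input = scaler.transform(user_input)
    return prediction(user_input[:, 0:num_inputs])

output = predict_on_user_input([0.7, 0.8, 1.8])
print('output: ', output)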
The steps described in this article are transferable to any other research area. But of course, you need to figure out the problem you are interested in and perhaps collect the dataset yourself if it is not available online. I hope this article helps you get started with using machine learning in Python, even if you have very little coding experience. Cheers!!
Translated from: https://medium.com/@sunnychugh/how-to-use-machine-learning-for-an-optical-photonics-application-in-40-lines-of-code-92cc1c6704f6