fastai study notes: 05_pet_breeds Questionnaire
1.Why do we first resize to a large size on the CPU, and then to a smaller size on the GPU?
First, when training a model we want all the images to be the same size so they can be collated into a tensor and passed to the GPU, and we want to minimize the number of distinct augmentation computations we perform (composing them and running them once, in batches, on the GPU). Second, the usual resizing and augmentation operations can introduce spurious empty regions and degrade the data; resizing to a larger image on the CPU first leaves spare margin, so the final transform-and-resize on the GPU produces fewer such artifacts.
2.If you are not familiar with regular expressions, find a regular expression tutorial, and some problem sets, and complete them. Have a look on the book’s website for suggestions.
Regular expressions (practice exercise; see the book's website for suggested tutorials).
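A minimal sketch using Python's `re` module. The filename and pattern below follow the Oxford-IIIT Pet naming convention used in this chapter (breed name, underscore, number, `.jpg`):

```python
import re

# Pet images are named "<breed>_<number>.jpg"; capture the breed part.
fname = "great_pyrenees_173.jpg"
match = re.match(r"(.+)_\d+\.jpg$", fname)
breed = match.group(1)
print(breed)  # great_pyrenees
```

The `(.+)` group greedily captures everything up to the last `_<digits>.jpg`, which is why multi-word breeds like `great_pyrenees` are matched whole.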
3.What are the two ways in which data is most commonly provided, for most deep learning datasets?
Data is usually provided in one of these two ways:
Individual files representing items of data, such as text documents or images, possibly organized into folders or with filenames representing information about those items
A table of data, such as in CSV format, where each row is an item which may include filenames providing a connection between the data in the table and data in other formats, such as text documents and images
4.Look up the documentation for L and try using a few of the new methods that it adds.
Running `L??` in a notebook shows its source; it really does add a lot of functionality, e.g. indexing with a list of indices, and methods such as `map`, `filter`, `sorted`, and `unique`.
5.Look up the documentation for the Python pathlib module and try using a few methods of the Path class.
https://docs.python.org/zh-cn/3/library/pathlib.html
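A few `Path` methods tried out, using a hypothetical filename in the pet-dataset style:

```python
from pathlib import Path

p = Path("images") / "great_pyrenees_173.jpg"
print(p.name)    # great_pyrenees_173.jpg
print(p.stem)    # great_pyrenees_173
print(p.suffix)  # .jpg
print(p.parent)  # images
```

The `/` operator builds paths portably, and `name`/`stem`/`suffix` replace the string slicing you would otherwise write by hand.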
6.Give two examples of ways that image transformations can degrade the quality of the data.
(1) Rotation introduces empty regions at the corners, which must be filled with padding or reflection.
(2) Resizing requires interpolation, which blurs detail and lowers image quality.
7.What method does fastai provide to view the data in a DataLoaders?
DataLoaders.show_batch
8.What method does fastai provide to help you debug a DataBlock?
DataBlock.summary
If you made a mistake while building your DataBlock, it is very likely you won't see it before this step. To debug this, we encourage you to use the summary method. It will attempt to create a batch from the source you give it, with a lot of details. Also, if it fails, you will see exactly at which point the error happens, and the library will try to give you some help. For instance, one common mistake is to forget to use a Resize transform, so you end up with pictures of different sizes and are not able to batch them; summary reports exactly that.
9.Should you hold off on training a model until you have thoroughly cleaned your data?
No. It is usually better to train a quick baseline model early: the trained model itself (e.g. via plot_top_losses and the confusion matrix) helps you find labeling and data problems.
10.What are the two pieces that are combined into cross-entropy loss in PyTorch?
A combination of the softmax function and negative log likelihood loss; in PyTorch, nn.CrossEntropyLoss applies log_softmax followed by nll_loss.
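The decomposition can be sketched in pure Python (this is a didactic re-implementation, not PyTorch's numerically-stabilized one; the activation values are illustrative):

```python
import math

def softmax(xs):
    # exponentiate, then normalize so the outputs sum to 1
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(activations, target):
    # negative log likelihood of the softmax probability of the target class
    probs = softmax(activations)
    return -math.log(probs[target])

acts = [0.02, -2.49, 1.25]        # raw model outputs for 3 classes
loss = cross_entropy(acts, target=2)
```

Note that the loss only looks at the probability assigned to the correct class; the other classes enter only through the softmax normalization.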
11.What are the two properties of activations that softmax ensures? Why is this important?
In our classification model, we use the softmax activation function in the final layer to ensure that the activations are all between 0 and 1, and that they sum to 1.
12.When might you want your activations to not have these two properties?
In multi-label classification, where each item can belong to more than one category (or to none), so the activations should not be forced to sum to 1.
13.Calculate the exp and softmax columns of <> yourself (i.e., in a spreadsheet, with a calculator, or in a notebook).
A simple calculation: exponentiate each activation, then divide each result by the sum of the exponentials.
14.Why can’t we use torch.where to create a loss function for datasets where our label can have more than two categories?
torch.where(condition, a, b) selects between exactly two options, so it only works when the label has two categories; with more categories we index into the per-class activations instead.
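A plain-Python analogue of the two cases (sketch only; the real code uses tensors and `torch.where` / `F.nll_loss`, and the probability values here are made up):

```python
# Binary case: where(target==1, prob, 1-prob) is a two-way choice between
# the predicted probability and its complement.
probs = [0.9, 0.4, 0.2]          # P(class 1) for three items
targets = [1, 0, 1]
binary = [p if t == 1 else 1 - p for p, t in zip(probs, targets)]

# With more than two categories there is no single "other option" to fall
# back to, so we index into the per-class probabilities by the target label:
multi_probs = [[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]]
multi_targets = [1, 0]
picked = [row[t] for row, t in zip(multi_probs, multi_targets)]
```

The indexing form generalizes the binary form: it works for any number of classes, which is why the multi-class loss is built on it.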
15.What is the value of log(-2)? Why?
It is undefined (over the real numbers): the logarithm is the inverse of exponentiation, and e^x is positive for every real x, so no real number has a negative exponential. In Python, math.log(-2) raises a ValueError.
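A quick check with the standard library:

```python
import math

# The real logarithm is only defined for positive inputs: e**x > 0 for all
# real x, so no real x satisfies e**x == -2.
try:
    math.log(-2)
except ValueError as e:
    print("undefined:", e)  # math domain error
```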
16.What are two good rules of thumb for picking a learning rate from the learning rate finder?
(1) One order of magnitude less than where the minimum loss was achieved (minimum/10).
(2) The last point where the loss was still clearly decreasing (the steepest downward slope).
17.What two steps does the fine_tune method do?
When we call the fine_tune method fastai does two things:
Trains the randomly added layers for one epoch, with all other layers frozen
Unfreezes all of the layers, and trains them all for the number of epochs requested
18.In Jupyter Notebook, how do you get the source code for a method or function?
Append `??` to the name and run the cell, e.g. `learn.fine_tune??`, to display its full source; a single `?` shows just the signature and docstring. fastai also provides a `doc` function that links to the documentation.
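`??` is an IPython/Jupyter feature; under the hood the standard library's `inspect` module can fetch the same source in plain Python (the function here is just a stand-in):

```python
import inspect

def example(x):
    # a stand-in function whose source we retrieve
    return x * 2

src = inspect.getsource(example)
print(src)
```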
19.What are discriminative learning rates?
Using different learning rates for different layers of the model: lower learning rates for the earlier layers (whose pretrained features are general and need little change) and higher learning rates for the later layers.
20.How is a Python slice object interpreted when passed as a learning rate to fastai?
The first value passed will be the learning rate in the earliest layer of the neural network, and the second value will be the learning rate in the final layer. The layers in between will have learning rates that are multiplicatively equidistant throughout that range.
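"Multiplicatively equidistant" means geometrically spaced. A pure-Python sketch (`spread_lrs` is a hypothetical helper for illustration, not fastai's actual implementation):

```python
def spread_lrs(lo, hi, n):
    # n learning rates from lo to hi, each a constant ratio apart
    if n == 1:
        return [hi]
    ratio = (hi / lo) ** (1 / (n - 1))
    return [lo * ratio ** i for i in range(n)]

# e.g. slice(1e-6, 1e-4) spread over 3 layer groups
lrs = spread_lrs(1e-6, 1e-4, 3)
```

With three groups and a 100x range, each group's rate is 10x the previous one, so the middle layers get intermediate rates on a log scale rather than a linear one.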
21.Why is early stopping a poor choice when using 1cycle training?
Before the days of 1cycle training it was very common to save the model at the end of each epoch, and then select whichever model had the best accuracy out of all of the models saved in each epoch. This is known as early stopping. However, this is very unlikely to give you the best answer, because those epochs in the middle occur before the learning rate has had a chance to reach the small values, where it can really find the best result. Therefore, if you find that you have overfit, what you should actually do is retrain your model from scratch, and this time select a total number of epochs based on where your previous best results were found.
22.What is the difference between resnet50 and resnet101?
resnet101 is a larger model: 101 layers instead of 50, so it has more parameters and is slower to train and more memory-hungry, but can capture more complex patterns.
23.What does to_fp16 do?
One technique that can speed things up a lot is mixed-precision training. This refers to using less-precise numbers (half-precision floating point, also called fp16) where possible during training.
That is, training with lower-precision numbers where possible; in fastai, calling Learner.to_fp16() enables this, speeding up training and reducing GPU memory use.
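What "half precision" costs can be illustrated with the standard library's fp16 struct format (this is an illustration of the number format only, not of fastai's mixed-precision machinery):

```python
import struct

# fp16 keeps roughly 3 decimal digits: round-tripping a float through the
# 2-byte 'e' format shows the rounding that mixed-precision training
# accepts in exchange for speed and memory savings.
def to_fp16(x):
    return struct.unpack("e", struct.pack("e", x))[0]

print(to_fp16(1.0001))  # rounds to 1.0 (fp16 spacing near 1 is ~0.001)
print(to_fp16(0.1))     # 0.0999755859375
```

This loss of precision is why "mixed" precision is used: some operations (e.g. loss accumulation) are kept in fp32 while the bulk of the arithmetic runs in fp16.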