Andrew Ng's "Deep Learning" Course 1, Week 4: Implementing a Neural Network with an Arbitrary Number of Layers, and Handling a Bug
Contents

- I. Implementation
  - 1. Utility functions provided by Andrew Ng
    - sigmoid
    - sigmoid derivative
    - relu
    - relu derivative
  - 2. Implementation code
    - Imports and configuration
    - Initialize parameters
    - Forward propagation
    - Compute cost
    - Backward propagation
    - Update parameters
    - Assemble the model
  - 3. Problems and thoughts
I. Implementation
1. Utility functions provided by Andrew Ng
These functions are shown here only for reference; they are utilities written by Andrew Ng and will simply be imported in the implementation section. See the provided attachment for details.
sigmoid
```python
def sigmoid(Z):
    A = 1/(1+np.exp(-Z))
    cache = Z
    return A, cache
```
sigmoid derivative

```python
def sigmoid_backward(dA, cache):
    Z = cache
    s = 1/(1+np.exp(-Z))
    dZ = dA * s * (1-s)
    return dZ
```
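For reference, the `s * (1 - s)` factor above is just the standard derivative of the sigmoid, applied through the chain rule:

$$
\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
\sigma'(z) = \sigma(z)\bigl(1 - \sigma(z)\bigr), \qquad
dZ = dA \odot \sigma'(Z)
$$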
relu

```python
def relu(Z):
    A = np.maximum(0, Z)
    cache = Z
    return A, cache
```
relu derivative

```python
def relu_backward(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ
```
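Similarly, `dZ[Z <= 0] = 0` implements the (sub)derivative of ReLU, which is 1 for positive inputs and 0 otherwise:

$$
\mathrm{relu}(z) = \max(0, z), \qquad
\mathrm{relu}'(z) =
\begin{cases}
1 & z > 0 \\
0 & z \le 0
\end{cases}, \qquad
dZ = dA \odot \mathrm{relu}'(Z)
$$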
2. Implementation code

Imports and configuration
```python
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)
```
Initialize parameters

```python
def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                  bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)

    for l in range(1, L):
        parameters['W%d' % l] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b%d' % l] = np.zeros((layer_dims[l], 1))

    return parameters
```
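As a quick sanity check (not part of the assignment), a hypothetical `[5, 4, 3]` architecture should give W1 of shape (4, 5), b1 of shape (4, 1), W2 of shape (3, 4), and b2 of shape (3, 1):

```python
# Hypothetical shape check for initialize_parameters_deep
params = initialize_parameters_deep([5, 4, 3])
for name, value in params.items():
    print(name, value.shape)
# Expected: W1 (4, 5), b1 (4, 1), W2 (3, 4), b2 (3, 1)
```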
Forward propagation

```python
def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called the pre-activation parameter
    cache -- a python tuple containing "A", "W" and "b"; stored for computing the backward pass efficiently
    """
    Z = np.dot(W, A) + b
    cache = (A, W, b)
    return Z, cache


def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python tuple containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)

    cache = (linear_cache, activation_cache)
    return A, cache


def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
              every cache of linear_activation_forward() with "relu" (there are L-1 of them, indexed from 0 to L-2)
              the cache of linear_activation_forward() with "sigmoid" (there is one, indexed L-1)
    """
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A
        A, linear_activation_cache = linear_activation_forward(A_prev,
            parameters['W%s' % l], parameters['b%s' % l], activation="relu")
        caches.append(linear_activation_cache)

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    AL, linear_activation_cache = linear_activation_forward(A,
        parameters['W%s' % L], parameters['b%s' % L], activation="sigmoid")
    caches.append(linear_activation_cache)

    return AL, caches
```
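Written out, the layer-wise computation that `L_model_forward` chains together is (with $A^{[0]} = X$, relu for layers $1, \dots, L-1$ and sigmoid for layer $L$):

$$
Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad
A^{[l]} = g^{[l]}\!\left(Z^{[l]}\right), \qquad l = 1, \dots, L
$$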
Compute cost

```python
def compute_cost(AL, Y):
    m = Y.shape[1]

    # Compute the cross-entropy cost from AL and Y.
    cost = -1. / m * (np.dot(np.log(AL), Y.T) + np.dot(np.log(1 - AL), (1 - Y).T))

    cost = np.squeeze(cost)
    return cost
```
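The `np.dot(..., Y.T)` form is just a vectorized way of writing the sum of the cross-entropy loss over the $m$ examples:

$$
J = -\frac{1}{m} \sum_{i=1}^{m} \Bigl( y^{(i)} \log a^{[L](i)} + \bigl(1 - y^{(i)}\bigr) \log\bigl(1 - a^{[L](i)}\bigr) \Bigr)
$$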
Backward propagation

```python
def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]

    dA_prev = np.dot(W.T, dZ)
    dW = 1. / m * np.dot(dZ, A_prev.T)
    db = 1. / m * np.sum(dZ, axis=1, keepdims=True)

    return dA_prev, dW, db


def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache

    # Each activation must be paired with its own backward function.
    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    return dA_prev, dW, db


def L_model_backward(AL, Y, caches):
    """
    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group

    Arguments:
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
              every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e. l = 0...L-2)
              the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])

    Returns:
    grads -- A dictionary with the gradients
             grads["dA" + str(l)] = ...
             grads["dW" + str(l)] = ...
             grads["db" + str(l)] = ...
    """
    grads = {}
    L = len(caches)  # the number of layers
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)  # after this line, Y is the same shape as AL

    # Initializing the backpropagation: gradient of the cost with respect to AL
    grads['dA' + str(L)] = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Output layer L: LINEAR -> SIGMOID
    layer = L
    grads['dA' + str(layer - 1)], grads['dW' + str(layer)], grads['db' + str(layer)] = \
        linear_activation_backward(grads['dA' + str(layer)], caches[layer - 1], activation="sigmoid")

    # Hidden layers L-1, ..., 1: LINEAR -> RELU
    for l in reversed(range(L - 1)):
        layer = l + 1
        grads['dA' + str(layer - 1)], grads['dW' + str(layer)], grads['db' + str(layer)] = \
            linear_activation_backward(grads['dA' + str(layer)], caches[layer - 1], activation="relu")

    return grads
```
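For a single layer $l$, the three functions above implement the following chain of gradients (this is also the indexing convention discussed in section 3):

$$
dZ^{[l]} = dA^{[l]} \odot g^{[l]\,\prime}\!\left(Z^{[l]}\right), \qquad
dW^{[l]} = \frac{1}{m}\, dZ^{[l]} \left(A^{[l-1]}\right)^{T}, \qquad
db^{[l]} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[l](i)}, \qquad
dA^{[l-1]} = \left(W^{[l]}\right)^{T} dZ^{[l]}
$$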
Update parameters

```python
def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward

    Returns:
    parameters -- python dictionary containing your updated parameters
                  parameters["W" + str(l)] = ...
                  parameters["b" + str(l)] = ...
    """
    L = len(parameters) // 2  # number of layers in the neural network

    for l in range(1, L + 1):
        parameters['W' + str(l)] = parameters['W' + str(l)] - learning_rate * grads['dW' + str(l)]
        parameters['b' + str(l)] = parameters['b' + str(l)] - learning_rate * grads['db' + str(l)]

    return parameters
```
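The update itself is plain gradient descent with learning rate $\alpha$:

$$
W^{[l]} := W^{[l]} - \alpha\, dW^{[l]}, \qquad
b^{[l]} := b^{[l]} - \alpha\, db^{[l]}, \qquad l = 1, \dots, L
$$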
Assemble the model

```python
def L_layer_model(X, Y, layers_dims, learning_rate=0.0075,
                  num_iterations=3000, print_cost=False):  # lr was 0.009
    """
    Implements an L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, it prints the cost every 100 steps

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    np.random.seed(1)
    costs = []

    # Parameters initialization.
    parameters = initialize_parameters_deep(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        AL, caches = L_model_forward(X, parameters)

        # Compute cost.
        cost = compute_cost(AL, Y)

        # Backward propagation.
        grads = L_model_backward(AL, Y, caches)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print and record the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    return parameters
```
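Not from the course notebook, but a minimal smoke test on random data can confirm the pieces fit together (the real assignment trains on the flattened cat dataset with `layers_dims = [12288, 20, 7, 5, 1]`); the synthetic `train_x` and `train_y` below are purely illustrative:

```python
# Hypothetical smoke test: 64 random examples with 20 features, random binary labels.
# The cost will not be meaningful on random labels; this only checks that the code runs.
np.random.seed(2)
train_x = np.random.randn(20, 64)
train_y = (np.random.rand(1, 64) > 0.5).astype(int)

parameters = L_layer_model(train_x, train_y, layers_dims=[20, 7, 5, 1],
                           learning_rate=0.0075, num_iterations=1000, print_cost=True)
```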
3. Problems and thoughts

Apart from the test case for the L-layer backward pass (L_model_backward), every other step, and the final result, are correct.
Comparing my code's output against the expected output of that test case, the results do not match at all! So I searched online for other people's solutions, and the difference between their code and mine is always in the L-layer backward propagation:
(1) Source
(2) Source
Judging from these two answers, their way of writing it looks wrong to me; yet they match the expected output and I do not. Even after changing my code to look like theirs, I still could not match it, so the mismatch may lie in my test case? I did not check whether their test cases are the same as mine. In any case, only this one test case fails and everything after it passes, which suggests my implementation is fine.
As for why their code looks incorrect to me, consider my L_model_backward code above.
Clearly, the initialization formula (the `grads['dA' + str(L)]` line) gives the derivative of the cost with respect to the output layer's (layer L's) activation, while each subsequent backward step (the sigmoid layer and the relu loop) should produce the derivative of the previous layer's activation together with the derivatives of the current layer's W and b; see the formulas below.
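Stated as formulas: the initialization line computes the gradient of the cost with respect to the output activation $A^{[L]}$, and each call to `linear_activation_backward` for layer $l$ consumes $dA^{[l]}$ and returns gradients indexed like this:

$$
dA^{[L]} = -\left(\frac{Y}{A^{[L]}} - \frac{1 - Y}{1 - A^{[L]}}\right),
\qquad
\text{layer } l:\quad dA^{[l]} \;\mapsto\; \left(dA^{[l-1]},\; dW^{[l]},\; db^{[l]}\right)
$$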