Notes on cs231n assignment2
A chronic procrastinator gets ready to continue with assignment 2...
First, a link to the assignment itself (link).
Q1: Fully-connected Neural Network (25 points)
In this part we need to complete several functions in layers.py. The first is affine_forward, i.e. the forward computation, which is fairly simple:
```python
N = x.shape[0]
x_reshape = x.reshape([N, -1])
x_plus_w = x_reshape.dot(w)  # [N, M]
out = x_plus_w + b
```

Next is the backward gradient computation. A useful trick here is to first write down the form of the derivative, and then pin down the concrete computation (whether to transpose, to sum, etc.) from the shapes of the terms in that expression. This trick is covered in the "Gradients for vectorized operations" section of the lecture 4 backprop notes (link).

```python
N = x.shape[0]
x_reshape = x.reshape([N, -1])  # [N, D]
dx = dout.dot(w.T).reshape(*x.shape)
dw = (x_reshape.T).dot(dout)
db = np.sum(dout, axis=0)
```

For example, for dx: from the forward expression above, its derivative involves dout and w. Since dout.shape == (N, M), w.shape == (D, M), and dx.shape == (N, d1, ..., d_k), the expression follows easily.
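As an extra sanity check (not part of the assignment's starter code), the shape-based reasoning can be verified numerically; the `affine_forward` here is a minimal re-implementation of the forward pass above:

```python
import numpy as np

def affine_forward(x, w, b):
    # minimal re-implementation of the affine forward pass above
    return x.reshape(x.shape[0], -1).dot(w) + b

np.random.seed(0)
x = np.random.randn(4, 2, 3)      # N=4, flattened D=6
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(4, 5)      # upstream gradient

# analytic gradients, as derived above
dx = dout.dot(w.T).reshape(*x.shape)
dw = x.reshape(4, -1).T.dot(dout)
db = np.sum(dout, axis=0)

# numerical gradient of sum(out * dout) with respect to each entry of w
h = 1e-6
dw_num = np.zeros_like(w)
for i in range(w.shape[0]):
    for j in range(w.shape[1]):
        wp, wm = w.copy(), w.copy()
        wp[i, j] += h
        wm[i, j] -= h
        dw_num[i, j] = np.sum(
            (affine_forward(x, wp, b) - affine_forward(x, wm, b)) * dout) / (2 * h)
# the affine map is linear in w, so the two should agree to near machine precision
```
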
For relu_forward, we can write it directly from its definition:
```python
out = np.maximum(0, x)
```

In relu_backward, the behavior is exactly that of the max gate from the backprop notes (link). The code:
```python
dx = dout * (x >= 0)
```

Next, take a look at the svm loss and softmax loss that come pre-implemented in this assignment, then implement the two-layer network. This is fairly simple: following the initialization requirements, initialize w and b:
```python
self.params['W1'] = weight_scale * np.random.randn(input_dim, hidden_dim)
self.params['b1'] = np.zeros(hidden_dim)
self.params['W2'] = weight_scale * np.random.randn(hidden_dim, num_classes)
self.params['b2'] = np.zeros(num_classes)
```

Then, in the loss function, assemble the 2-layer network:
```python
layer1_out, layer1_cache = affine_relu_forward(X, self.params['W1'], self.params['b1'])
layer2_out, layer2_cache = affine_forward(layer1_out, self.params['W2'], self.params['b2'])
scores = layer2_out
```

Compute the gradients:
```python
loss, dscores = softmax_loss(scores, y)
loss = loss + 0.5 * self.reg * np.sum(self.params['W1'] * self.params['W1']) + \
       0.5 * self.reg * np.sum(self.params['W2'] * self.params['W2'])
d1_out, dw2, db2 = affine_backward(dscores, layer2_cache)
grads['W2'] = dw2 + self.reg * self.params['W2']
grads['b2'] = db2
dx1, dw1, db1 = affine_relu_backward(d1_out, layer1_cache)
grads['W1'] = dw1 + self.reg * self.params['W1']
grads['b1'] = db1
```

Next, everything is tied together through a Solver object. solver.py already documents how to use it, so we can use it directly:
```python
solver = Solver(model, data,
                update_rule='sgd',
                optim_config={'learning_rate': 1e-3},
                lr_decay=0.80,
                num_epochs=10,
                batch_size=100,
                print_every=100)
solver.train()
scores = solver.model.loss(data['X_test'])
y_pred = np.argmax(scores, axis=1)
acc = np.mean(y_pred == data['y_test'])
print("test acc: ", acc)
```

The final test-set accuracy is 52.3%. Plotting the training curves:
Pay attention to the training and validation accuracies here: if the gap between them is large and the validation accuracy improves only slowly, consider whether the model is overfitting.
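The kind of check described above can be written down concretely; the accuracy histories here are made-up illustrative numbers, not results from this run (Solver exposes them as `solver.train_acc_history` / `solver.val_acc_history`):

```python
# hypothetical accuracy histories per epoch
train_acc = [0.30, 0.45, 0.55, 0.65, 0.75, 0.85]
val_acc = [0.29, 0.42, 0.48, 0.50, 0.51, 0.51]

# large train/val gap plus a stalled validation curve suggests overfitting
gap = train_acc[-1] - val_acc[-1]
val_stalled = (val_acc[-1] - val_acc[-3]) < 0.02
likely_overfitting = gap > 0.15 and val_stalled
```

The thresholds (0.15, 0.02) are arbitrary; what matters is watching both the gap and the slope of the validation curve.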
Next is implementing FullyConnectedNet, which follows the same pattern. The layer structure is:
{affine - [batch norm] - relu - [dropout]} x (L - 1) - affine - softmax

In the initialization, set up W, b, gamma, and beta following the hints:
```python
shape1 = input_dim
for i, shape2 in enumerate(hidden_dims):
    self.params['W' + str(i+1)] = weight_scale * np.random.randn(shape1, shape2)
    self.params['b' + str(i+1)] = np.zeros(shape2)
    shape1 = shape2
    if self.use_batchnorm:
        self.params['gamma' + str(i+1)] = np.ones(shape2)
        self.params['beta' + str(i+1)] = np.zeros(shape2)
self.params['W' + str(self.num_layers)] = weight_scale * np.random.randn(shape1, num_classes)
self.params['b' + str(self.num_layers)] = np.zeros(num_classes)
```

Remember to add the regularization term when computing the loss. Since we have to handle the optional dropout and batchnorm layers, for convenience add some helper functions to layer_utils.py:
```python
def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    a, fc_cache = affine_forward(x, w, b)
    bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(bn)
    cache = (fc_cache, bn_cache, relu_cache)
    return out, cache

def affine_bn_relu_backward(dout, cache):
    fc_cache, bn_cache, relu_cache = cache
    dbn = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward_alt(dbn, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta
```

Whether a dropout layer is used affects the gradient computation, so the intermediate values are stored in two dicts. The forward pass:
```python
ar_cache = {}
dp_cache = {}
layer_input = X
for i in range(1, self.num_layers):
    if self.use_batchnorm:
        layer_input, ar_cache[i-1] = affine_bn_relu_forward(
            layer_input,
            self.params['W' + str(i)], self.params['b' + str(i)],
            self.params['gamma' + str(i)], self.params['beta' + str(i)],
            self.bn_params[i-1])
    else:
        layer_input, ar_cache[i-1] = affine_relu_forward(
            layer_input, self.params['W' + str(i)], self.params['b' + str(i)])
    if self.use_dropout:
        layer_input, dp_cache[i-1] = dropout_forward(layer_input, self.dropout_param)
layer_out, ar_cache[self.num_layers] = affine_forward(
    layer_input,
    self.params['W' + str(self.num_layers)],
    self.params['b' + str(self.num_layers)])
scores = layer_out
```

The backward pass is essentially the forward pass run in reverse:
```python
loss, dscores = softmax_loss(scores, y)
loss += 0.5 * self.reg * np.sum(self.params['W' + str(self.num_layers)] *
                                self.params['W' + str(self.num_layers)])
dout, dw, db = affine_backward(dscores, ar_cache[self.num_layers])
grads['W' + str(self.num_layers)] = dw + self.reg * self.params['W' + str(self.num_layers)]
grads['b' + str(self.num_layers)] = db
for i in range(self.num_layers - 1):
    layer = self.num_layers - i - 1
    loss += 0.5 * self.reg * np.sum(self.params['W' + str(layer)] *
                                    self.params['W' + str(layer)])
    if self.use_dropout:
        dout = dropout_backward(dout, dp_cache[layer-1])
    if self.use_batchnorm:
        dout, dw, db, dgamma, dbeta = affine_bn_relu_backward(dout, ar_cache[layer-1])
        grads['gamma' + str(layer)] = dgamma
        grads['beta' + str(layer)] = dbeta
    else:
        dout, dw, db = affine_relu_backward(dout, ar_cache[layer-1])
    grads['W' + str(layer)] = dw + self.reg * self.params['W' + str(layer)]
    grads['b' + str(layer)] = db
```

Next we train on a small sample of the data, where the goal is to reach 100% training accuracy within 20 epochs. The default learning_rate does not get there, so it has to be tuned. The tuning guide here is the loss-curve figure from the cs231n neural-networks-3 notes (link):
The result with the initial parameter (1e-4) is shown below:
From the curve we can infer that the learning rate is too low, so increase it appropriately (1e-3, 1e-2; search over a coarse range first, then over a finer one).
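A coarse-to-fine search is usually done by sampling learning rates log-uniformly; a minimal sketch, where the concrete ranges are illustrative assumptions:

```python
import numpy as np

np.random.seed(0)
# coarse stage: sample learning rates log-uniformly over a wide range
coarse_lrs = 10 ** np.random.uniform(-5, -1, size=10)
# fine stage: suppose the best coarse results cluster near 1e-3..1e-2,
# then sample again over that narrower range
fine_lrs = 10 ** np.random.uniform(-3, -2, size=10)
```

Sampling in log space matters because learning rates act multiplicatively: a uniform sample over [1e-5, 1e-1] would almost never land below 1e-2.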
With learning_rate = 9e-3 the result is as follows:
Then comes the 5-layer network, which also needs lr and weight_scale tuned. Here I tuned lr first, and only after finding its rough range did I tune the initialization scale. weight_scale has an enormous effect on the loss, which makes this much trickier than the three-layer network above, and the optimal parameters are much harder to find. In the end: learning_rate = 2e-2, weight_scale = 4e-2:
But by that point the model has long since overfit (look at val_acc)...
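Why weight_scale matters so much more in the deeper net can be seen from how activation magnitudes compound with depth; a sketch with made-up layer sizes (not the actual network from the assignment):

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(100, 50)   # a batch of 100 inputs with 50 features
stds = {}
for scale in (1e-2, 4e-2):
    h = x
    for _ in range(5):          # 5 affine + ReLU layers, no batchnorm
        W = scale * np.random.randn(50, 50)
        h = np.maximum(0, h.dot(W))
    stds[scale] = h.std()
# with too small a scale, activations (and hence gradients) shrink layer by layer
```

Each extra layer multiplies the activation scale by roughly `scale * sqrt(fan_in)`, so a small change in weight_scale is amplified exponentially with depth.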
Next come several ways of implementing the parameter update. So far we have used stochastic gradient descent (SGD, i.e. updating the trainable parameters w, b, etc. once per minibatch). The first variant is SGD+Momentum (from here on the material belongs to lecture 7):
This can be written directly from the code in the notes:
```python
v = config['momentum'] * v - config['learning_rate'] * dw
next_w = w + v
```

rmsprop and adam are similar.
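Before moving on, a quick sanity check of the momentum update (not part of the assignment): run it on the toy quadratic f(w) = 0.5 * w², whose gradient is simply w, and watch w decay toward the minimum at 0.

```python
import numpy as np

def sgd_momentum(w, dw, config):
    # same update as above: a velocity vector accumulates decayed past gradients
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    config['velocity'] = v
    return w + v, config

w = np.array([5.0])
config = {'learning_rate': 1e-2, 'momentum': 0.9}
for _ in range(200):
    w, config = sgd_momentum(w, w, config)  # gradient of 0.5*w**2 is w itself
# w oscillates (the velocity overshoots) but decays toward 0
```

The oscillation is characteristic of momentum: the iterate behaves like a damped harmonic oscillator rather than a pure exponential decay.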
rmsprop:
```python
config['cache'] = config['decay_rate'] * config['cache'] + (1 - config['decay_rate']) * (dx ** 2)
next_x = x - config['learning_rate'] * dx / (np.sqrt(config['cache']) + config['epsilon'])
```

adam:
```python
config['t'] += 1
config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dx
config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * (dx ** 2)
mb = config['m'] / (1 - config['beta1'] ** config['t'])
vb = config['v'] / (1 - config['beta2'] ** config['t'])
next_x = x - config['learning_rate'] * mb / (np.sqrt(vb) + config['epsilon'])
```

Q2: Batch Normalization (25 points)
The forward pass and its backward pass are derived in detail in the paper:
The main formulas (per feature dimension, from the batch normalization paper) are: mean = (1/N) * sum_i(x_i), var = (1/N) * sum_i((x_i - mean)^2), x_norm = (x - mean) / sqrt(var + eps), and out = gamma * x_norm + beta.
The code:
```python
sample_mean = np.mean(x, axis=0)  # sample_mean's shape is [D,]
sample_var = np.var(x, axis=0)    # same shape as sample_mean
x_norm = (x - sample_mean) / np.sqrt(sample_var + eps)
out = gamma * x_norm + beta
cache = (x, sample_mean, sample_var, x_norm, gamma, beta, eps)
running_mean = momentum * running_mean + (1 - momentum) * sample_mean
running_var = momentum * running_var + (1 - momentum) * sample_var
```

The batchnorm_backward pass is fairly simple:
```python
N, D = dout.shape
x, sample_mean, sample_var, x_norm, gamma, beta, eps = cache
dx_norm = dout * gamma
dsample_var = np.sum(dx_norm * (-0.5 * x_norm / (sample_var + eps)), axis=0)
dsample_mean = np.sum(-dx_norm / np.sqrt(sample_var + eps), axis=0) + \
               dsample_var * np.sum(-2.0 / N * (x - sample_mean), axis=0)
dx1 = dx_norm / np.sqrt(sample_var + eps)
dx2 = dsample_var * (2.0 / N) * (x - sample_mean)      # contribution through sample_var
dx3 = dsample_mean * (1.0 / N * np.ones_like(dout))    # contribution through sample_mean
dx = dx1 + dx2 + dx3
dgamma = np.sum(dout * x_norm, axis=0)
dbeta = np.sum(dout, axis=0)
```

As for batchnorm_backward_alt, I have not yet found anything meaningfully faster than the above; removing some repeated computation helps, but the effect is not noticeable...
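For reference, one commonly used form of batchnorm_backward_alt collapses the three dx terms above into a single fused expression (a sketch, using the same cache layout as the code above; it exploits the fact that sum(x - sample_mean) == 0):

```python
import numpy as np

def batchnorm_backward_alt(dout, cache):
    x, sample_mean, sample_var, x_norm, gamma, beta, eps = cache
    dx_norm = dout * gamma
    # dx1 + dx2 + dx3 from the step-by-step version, merged into one expression
    dx = (dx_norm - dx_norm.mean(axis=0)
          - x_norm * (dx_norm * x_norm).mean(axis=0)) / np.sqrt(sample_var + eps)
    dgamma = np.sum(dout * x_norm, axis=0)
    dbeta = np.sum(dout, axis=0)
    return dx, dgamma, dbeta
```

One quick check: with gamma = 1 and a constant dout, dx should vanish, because shifting every normalized output by the same constant is absorbed entirely by the mean subtraction.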
The rest of this part only requires running the corresponding cells; the batchnorm support in fc_net.py was already implemented in Q1...
Q3: Dropout (10 points)
The dropout code is all covered in the course notes:
mode == 'train':

```python
mask = (np.random.rand(*x.shape) >= p) / (1 - p)  # inverted dropout: scale at train time
out = x * mask
```

mode == 'test':

```python
out = x
```

backward:

```python
dx = dout * mask
```

Experiments then show that adding dropout makes training accuracy rise more slowly while validation accuracy ends up higher, which indicates it is effectively preventing overfitting.

Q4: Convolutional Networks (30 points)
First is Convolution: Naive forward pass, which is similar in spirit to what came before; drawing a picture of the computation makes the code much easier to write:
```python
N, C, H, W = x.shape
F, _, HH, WW = w.shape
stride = conv_param['stride']
pad = conv_param['pad']
x_pad = np.pad(x, ((0,), (0,), (pad,), (pad,)), 'constant')
out_h = 1 + (H + 2 * pad - HH) // stride
out_w = 1 + (W + 2 * pad - WW) // stride
out = np.zeros([N, F, out_h, out_w])
for j in range(out_h):
    for k in range(out_w):
        h_coord = min(j * stride, H + 2 * pad - HH)
        w_coord = min(k * stride, W + 2 * pad - WW)
        for i in range(F):
            out[:, i, j, k] = np.sum(
                x_pad[:, :, h_coord:h_coord+HH, w_coord:w_coord+WW] * w[i, :, :, :],
                axis=(1, 2, 3))
out = out + b[None, :, None, None]
```

Next is Convolution: Naive backward pass, which is essentially the reverse process. One thing to note: how the sum in the original forward pass is handled:
```python
db = np.sum(dout, axis=(0, 2, 3))
x, w, b, conv_param = cache
N, C, H, W = x.shape
F, _, HH, WW = w.shape
stride = conv_param['stride']
pad = conv_param['pad']
x_pad = np.pad(x, ((0,), (0,), (pad,), (pad,)), 'constant')
out_h = 1 + (H + 2 * pad - HH) // stride
out_w = 1 + (W + 2 * pad - WW) // stride
dx = np.zeros_like(x)
dw = np.zeros_like(w)
dx_pad = np.zeros_like(x_pad)
for j in range(out_h):
    for k in range(out_w):
        h_coord = min(j * stride, H + 2 * pad - HH)
        w_coord = min(k * stride, W + 2 * pad - WW)
        for i in range(N):
            dx_pad[i, :, h_coord:h_coord+HH, w_coord:w_coord+WW] += \
                np.sum((dout[i, :, j, k])[:, None, None, None] * w, axis=0)
        for i in range(F):
            dw[i, :, :, :] += np.sum(
                x_pad[:, :, h_coord:h_coord+HH, w_coord:w_coord+WW] *
                (dout[:, i, j, k])[:, None, None, None],
                axis=0)
dx = dx_pad[:, :, pad:-pad, pad:-pad]
```

Then max_pool_forward_naive:
```python
N, C, H, W = x.shape
ph, pw, stride = pool_param['pool_height'], pool_param['pool_width'], pool_param['stride']
out_h = 1 + (H - ph) // stride
out_w = 1 + (W - pw) // stride
out = np.zeros([N, C, out_h, out_w])
for i in range(out_h):
    for j in range(out_w):
        h_coord = min(i * stride, H - ph)
        w_coord = min(j * stride, W - pw)
        out[:, :, i, j] = np.max(x[:, :, h_coord:h_coord+ph, w_coord:w_coord+pw], axis=(2, 3))
```

The core of the backward pass is how to compute the gradient of max. From what we learned earlier, max acts as a gradient router: it zeroes the gradient for every input except the maximum. This gives:
```python
x, pool_param = cache
N, C, H, W = x.shape
ph, pw, stride = pool_param['pool_height'], pool_param['pool_width'], pool_param['stride']
out_h = 1 + (H - ph) // stride
out_w = 1 + (W - pw) // stride
dx = np.zeros_like(x)
for i in range(out_h):
    for j in range(out_w):
        h_coord = min(i * stride, H - ph)
        w_coord = min(j * stride, W - pw)
        max_num = np.max(x[:, :, h_coord:h_coord+ph, w_coord:w_coord+pw], axis=(2, 3))
        mask = (x[:, :, h_coord:h_coord+ph, w_coord:w_coord+pw] == max_num[:, :, None, None])
        dx[:, :, h_coord:h_coord+ph, w_coord:w_coord+pw] += \
            (dout[:, :, i, j])[:, :, None, None] * mask
```

The Three layer ConvNet that follows is even simpler: just stack the layer functions.
__init__: the thing to watch here is the first dimension of W2 (the conv layer is configured to preserve H and W, and the 2x2 max pool halves each, hence (H//2)*(W//2)*num_filters):
```python
C, H, W = input_dim
self.params['W1'] = weight_scale * np.random.randn(num_filters, C, filter_size, filter_size)
self.params['b1'] = np.zeros(num_filters)
self.params['W2'] = weight_scale * np.random.randn((H//2) * (W//2) * num_filters, hidden_dim)
self.params['b2'] = np.zeros(hidden_dim)
self.params['W3'] = weight_scale * np.random.randn(hidden_dim, num_classes)
self.params['b3'] = np.zeros(num_classes)
```

loss, forward pass:
```python
layer1, cache1 = conv_relu_pool_forward(X, W1, b1, conv_param, pool_param)
layer2, cache2 = affine_relu_forward(layer1, W2, b2)
scores, cache3 = affine_forward(layer2, W3, b3)
```

loss, backward pass:
```python
loss, dout = softmax_loss(scores, y)
loss += 0.5 * self.reg * (np.sum(W1**2) + np.sum(W2**2) + np.sum(W3**2))
dx3, grads['W3'], grads['b3'] = affine_backward(dout, cache3)
dx2, grads['W2'], grads['b2'] = affine_relu_backward(dx3, cache2)
dx, grads['W1'], grads['b1'] = conv_relu_pool_backward(dx2, cache1)
grads['W3'] = grads['W3'] + self.reg * self.params['W3']
grads['W2'] = grads['W2'] + self.reg * self.params['W2']
grads['W1'] = grads['W1'] + self.reg * self.params['W1']
```

Next comes spatial_batchnorm:
The forward pass follows the hints: the mean and variance are computed per channel, over the batch and all spatial positions. For example, for 10 images of size 32x32x3, each of the 3 channels gets one mean and one variance, each computed over all 10*32*32 values of that channel.
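A small check (with arbitrary shapes) that the transpose-and-reshape trick used in the implementation really produces per-channel statistics:

```python
import numpy as np

np.random.seed(0)
N, C, H, W = 4, 3, 5, 5            # arbitrary small shapes
x = np.random.randn(N, C, H, W)

# per-channel mean, computed directly over batch and spatial dimensions
mu_direct = x.mean(axis=(0, 2, 3))
# the same thing via the (N*H*W, C) reshape that spatial batchnorm feeds
# into the plain batchnorm_forward
mu_reshape = x.transpose(0, 2, 3, 1).reshape(-1, C).mean(axis=0)
```

The transpose before the reshape is essential: reshaping (N, C, H, W) directly to (-1, C) would interleave values from different channels into the same column.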
```python
N, C, H, W = x.shape
x2 = x.transpose(0, 2, 3, 1).reshape((N*H*W, C))
out, cache = batchnorm_forward(x2, gamma, beta, bn_param)
out = out.reshape(N, H, W, C).transpose(0, 3, 1, 2)
```

backward pass:
```python
N, C, H, W = dout.shape
dout2 = dout.transpose(0, 2, 3, 1).reshape(N*H*W, C)
dx, dgamma, dbeta = batchnorm_backward(dout2, cache)
dx = dx.reshape(N, H, W, C).transpose(0, 3, 1, 2)
```

Q5: PyTorch / TensorFlow on CIFAR-10 (10 points)
Q5: Do something extra! (up to +10 points)
Summary