TensorFlow Weight Initialization
1. A 10-layer neural network where every layer's weights are drawn from a random normal distribution with mean 0 and standard deviation 0.01
```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# 10-layer fully connected network
data = tf.constant(np.random.randn(2000, 800).astype('float32'))
layer_sizes = [800 - 50 * i for i in range(0, 10)]
num_layers = len(layer_sizes)

fcs = []
for i in range(0, num_layers - 1):
    X = data if i == 0 else fcs[i - 1]
    node_in = layer_sizes[i]
    node_out = layer_sizes[i + 1]
    # Weights ~ N(0, 0.01^2): standard normal scaled by 0.01
    W = tf.Variable(np.random.randn(node_in, node_out).astype('float32')) * 0.01
    fc = tf.matmul(X, W)
    # fc = tf.contrib.layers.batch_norm(fc, center=True, scale=True,
    #                                   is_training=True)
    fc = tf.nn.tanh(fc)
    fcs.append(fc)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('input mean {0:.5f} and std {1:.5f}'.format(np.mean(data.eval()),
                                                      np.std(data.eval())))
    for idx, fc in enumerate(fcs):
        print('layer {0} mean {1:.5f} and std {2:.5f}'.format(idx + 1, np.mean(fc.eval()),
                                                              np.std(fc.eval())))
    for idx, fc in enumerate(fcs):
        print(fc)
        plt.subplot(1, len(fcs), idx + 1)
        # Draw a histogram with 30 bins
        plt.hist(fc.eval().flatten(), 30, range=[-1, 1])
        plt.xlabel('layer ' + str(idx + 1))
        plt.yticks([])  # turn off the y-axis ticks
    plt.show()
```

Histograms of the output distribution at each layer:
As the depth increases, the output values quickly collapse toward 0; in the later layers almost every output x is extremely close to 0. Recall the back-propagation algorithm: by the chain rule, the gradient with respect to a layer's weights is the product of the upstream gradient and the layer's local gradient, and the layer's input x enters that product as a multiplicative factor. When x is nearly 0, the weight gradients are nearly 0 as well, and the parameters can hardly be updated.
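To make the chain-rule argument concrete, here is the weight gradient of one fully connected tanh layer as used above (a standard derivation, not taken from the original post). With $fc = \tanh(XW)$ and upstream gradient $\delta = \partial L/\partial fc$:

$$\frac{\partial L}{\partial W} = X^{\top}\big(\delta \odot (1 - \tanh^{2}(XW))\big)$$

The layer input $X$ multiplies the whole expression, so when $X$ is nearly 0 the weight gradient is nearly 0 regardless of how large $\delta$ is.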
2. Change the standard deviation to 1
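Only the weight line changes compared with the first experiment; a minimal sketch of the modified initialization (the rest of the script stays exactly as above):

```python
# Standard deviation 1: simply drop the 0.01 scaling factor
W = tf.Variable(np.random.randn(node_in, node_out).astype('float32'))
```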
Now almost all the values pile up near -1 or 1: the neurons are saturated. Since the gradient of tanh is close to 0 near -1 and 1, the gradients are again too small and the parameters are hard to update.
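A quick numerical check of the saturation claim (not part of the original post): the derivative of tanh is $1 - \tanh^{2}(x)$, so even moderately large pre-activations already give near-zero gradients.

```python
import numpy as np

for x in [0.0, 1.0, 3.0, 5.0]:
    # d/dx tanh(x) = 1 - tanh(x)^2
    print(x, 1.0 - np.tanh(x) ** 2)
# roughly 1.0, 0.42, 0.0099, 0.00018
```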
3. Xavier initialization solves the problems above, and the scheme itself is not complicated. The basic idea of Xavier initialization is to keep the variance of a layer's output equal to the variance of its input, which prevents the outputs from collapsing toward 0. Xavier initialization was proposed by Xavier Glorot et al. in 2010, He initialization by Kaiming He et al. in 2015, and Batch Normalization by Sergey Ioffe et al. in 2015.
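For reference, the variance target behind Xavier initialization for a layer with $n_{\text{in}}$ inputs and $n_{\text{out}}$ outputs is the standard formula

$$\mathrm{Var}(W) = \frac{2}{n_{\text{in}} + n_{\text{out}}}, \qquad \text{often simplified to} \qquad \mathrm{Var}(W) = \frac{1}{n_{\text{in}}},$$

and the simplified form is what the code below uses: standard normal weights divided by $\sqrt{n_{\text{in}}}$, i.e. `np.sqrt(node_in)`.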
```python
# Same 10-layer network as before (imports as in the first example);
# only the weight initialization changes.
data = tf.constant(np.random.randn(2000, 800).astype('float32'))
layer_sizes = [800 - 50 * i for i in range(0, 10)]
num_layers = len(layer_sizes)

fcs = []
for i in range(0, num_layers - 1):
    X = data if i == 0 else fcs[i - 1]
    node_in = layer_sizes[i]
    node_out = layer_sizes[i + 1]
    # Xavier-style scaling: standard normal divided by sqrt(node_in)
    W = tf.Variable(np.random.randn(node_in, node_out).astype('float32')) / np.sqrt(node_in)
    fc = tf.matmul(X, W)
    # fc = tf.contrib.layers.batch_norm(fc, center=True, scale=True,
    #                                   is_training=True)
    fc = tf.nn.tanh(fc)
    fcs.append(fc)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('input mean {0:.5f} and std {1:.5f}'.format(np.mean(data.eval()),
                                                      np.std(data.eval())))
    for idx, fc in enumerate(fcs):
        print('layer {0} mean {1:.5f} and std {2:.5f}'.format(idx + 1, np.mean(fc.eval()),
                                                              np.std(fc.eval())))
    for idx, fc in enumerate(fcs):
        print(fc)
        plt.subplot(1, len(fcs), idx + 1)
        # Draw a histogram with 30 bins
        plt.hist(fc.eval().flatten(), 30, range=[-1, 1])
        plt.xlabel('layer ' + str(idx + 1))
        plt.yticks([])  # turn off the y-axis ticks
    plt.show()
```

The output values still keep a well-behaved distribution even after many layers.
4. Replace the activation function with ReLU
```python
# Same network with Xavier scaling, but with ReLU instead of tanh
# (imports as in the first example).
data = tf.constant(np.random.randn(2000, 800).astype('float32'))
layer_sizes = [800 - 50 * i for i in range(0, 10)]
num_layers = len(layer_sizes)

fcs = []
for i in range(0, num_layers - 1):
    X = data if i == 0 else fcs[i - 1]
    node_in = layer_sizes[i]
    node_out = layer_sizes[i + 1]
    W = tf.Variable(np.random.randn(node_in, node_out).astype('float32')) / np.sqrt(node_in)
    fc = tf.matmul(X, W)
    # fc = tf.contrib.layers.batch_norm(fc, center=True, scale=True,
    #                                   is_training=True)
    fc = tf.nn.relu(fc)
    fcs.append(fc)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('input mean {0:.5f} and std {1:.5f}'.format(np.mean(data.eval()),
                                                      np.std(data.eval())))
    for idx, fc in enumerate(fcs):
        print('layer {0} mean {1:.5f} and std {2:.5f}'.format(idx + 1, np.mean(fc.eval()),
                                                              np.std(fc.eval())))
    for idx, fc in enumerate(fcs):
        print(fc)
        plt.subplot(1, len(fcs), idx + 1)
        # Draw a histogram with 30 bins
        plt.hist(fc.eval().flatten(), 30, range=[-1, 1])
        plt.xlabel('layer ' + str(idx + 1))
        plt.yticks([])  # turn off the y-axis ticks
    plt.show()
```

The first few layers still look fine, but the later layers drift closer and closer to 0.
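A rough way to see why (this is the same observation that motivates He initialization in the next section): with Xavier scaling the linear part roughly preserves variance, but ReLU zeroes about half of the pre-activations, so each layer cuts the output variance roughly in half; after 9 layers the standard deviation has shrunk by about $(1/\sqrt{2})^{9} \approx 0.04$.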
5. The idea behind He initialization: in a ReLU network, assume that in each layer half of the neurons are activated and the other half output 0, so to keep the variance unchanged we only need to divide by an extra factor of 2 on top of Xavier:
```python
W = tf.Variable(np.random.randn(node_in, node_out).astype('float32')) / np.sqrt(node_in / 2)
```

6. Batch Normalization is a clever yet blunt way to weaken the impact of a bad initialization. What we want is for the values feeding into the nonlinear activation to have a well-behaved distribution (e.g., Gaussian), so that gradients can be computed and the weights updated during back propagation.
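For reference, the transform that `batch_norm` applies with `center=True, scale=True` is the standard Batch Normalization formula: each pre-activation is standardized over the mini-batch and then given a learnable scale and shift,

$$\hat{x} = \frac{x - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2} + \epsilon}}, \qquad y = \gamma\hat{x} + \beta,$$

where $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}^{2}$ are the batch mean and variance, and $\gamma$, $\beta$ are learned.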
```python
# Same ReLU network with Xavier scaling, now with Batch Normalization enabled
# before the activation (imports as in the first example).
data = tf.constant(np.random.randn(2000, 800).astype('float32'))
layer_sizes = [800 - 50 * i for i in range(0, 10)]
num_layers = len(layer_sizes)

fcs = []
for i in range(0, num_layers - 1):
    X = data if i == 0 else fcs[i - 1]
    node_in = layer_sizes[i]
    node_out = layer_sizes[i + 1]
    W = tf.Variable(np.random.randn(node_in, node_out).astype('float32')) / np.sqrt(node_in)
    fc = tf.matmul(X, W)
    fc = tf.contrib.layers.batch_norm(fc, center=True, scale=True,
                                      is_training=True)
    fc = tf.nn.relu(fc)
    fcs.append(fc)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('input mean {0:.5f} and std {1:.5f}'.format(np.mean(data.eval()),
                                                      np.std(data.eval())))
    for idx, fc in enumerate(fcs):
        print('layer {0} mean {1:.5f} and std {2:.5f}'.format(idx + 1, np.mean(fc.eval()),
                                                              np.std(fc.eval())))
    for idx, fc in enumerate(fcs):
        print(fc)
        plt.subplot(1, len(fcs), idx + 1)
        # Draw a histogram with 30 bins
        plt.hist(fc.eval().flatten(), 30, range=[-1, 1])
        plt.xlabel('layer ' + str(idx + 1))
        plt.yticks([])  # turn off the y-axis ticks
    plt.show()
```
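Note that `tf.Session`, `tf.global_variables_initializer`, and `tf.contrib.layers.batch_norm` are TensorFlow 1.x APIs, and `tf.contrib` no longer exists in TensorFlow 2.x. Below is a rough sketch of the same recipe (He initialization plus Batch Normalization before ReLU) written against the Keras API, assuming TensorFlow 2.x; the layer sizes simply mirror the experiments above:

```python
import tensorflow as tf

layer_sizes = [800 - 50 * i for i in range(10)]

model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(layer_sizes[0],)))
for units in layer_sizes[1:]:
    # He initialization for each fully connected layer's weights
    model.add(tf.keras.layers.Dense(units, use_bias=False,
                                    kernel_initializer='he_normal'))
    # Normalize the pre-activations, then apply ReLU
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.ReLU())
```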
Summary

With a bad initialization the activations of a deep tanh network either collapse toward 0 (standard deviation too small) or saturate near -1 and 1 (standard deviation too large), and in both cases the gradients vanish and training stalls. Xavier initialization keeps the activation variance roughly constant across tanh layers, He initialization adapts the same idea to ReLU by compensating for the half of the units that are zeroed out, and Batch Normalization normalizes the pre-activations directly, which greatly weakens the influence of a bad initialization.