Hooray! Grokking Deep Learning is now open source: 16 chapters of barrier-free deep learning, and high-school math is all you need!
Red Stone's personal website: www.redstonewill.com
Today I'd like to introduce an excellent tutorial for getting started with, and advancing in, deep learning: Grokking Deep Learning (Chinese title: 《图解深度学习》). The book is published by Manning through its MEAP (early-access, subscription-style) program and has been released in irregular installments since August 2016. Today the book is finally complete, so bring out the confetti. It is aimed squarely at beginners, and its rich, lively illustrations make it a great entry point into deep learning.
About the Author
The book's author, Andrew Trask, is a scientist at DeepMind and the leader of OpenMined, and he did his doctorate at the University of Oxford.
His personal homepage: https://iamtrask.github.io/
About the Book
This book teaches you the fundamentals of deep learning from an intuitive standpoint, so that you understand how machines learn with deep learning. It does not focus on frameworks such as Torch, TensorFlow, or Keras; instead, it teaches you the deep learning methods that those frameworks are built on. Everything is implemented from scratch using only Python and NumPy, so you come to understand every detail of training a neural network rather than just how to use a code library. Think of this book as a prerequisite for mastering any of the major frameworks.
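To give a sense of that from-scratch style, here is a minimal sketch of a tiny two-layer network trained with nothing but NumPy on a toy problem. This is my own illustration in the spirit of the book, not code taken from it; the toy data, the relu activation, and names such as weights_0_1 and weights_1_2 are chosen just for this example.

import numpy as np

np.random.seed(1)

# Toy data: 4 binary input patterns and their target outputs (XOR of the first two bits)
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

def relu(x):
    return (x > 0) * x

def relu_deriv(output):
    return output > 0

alpha = 0.2          # learning rate
hidden_size = 4

weights_0_1 = 2 * np.random.random((3, hidden_size)) - 1
weights_1_2 = 2 * np.random.random((hidden_size, 1)) - 1

for iteration in range(300):
    # Forward pass
    layer_1 = relu(X.dot(weights_0_1))
    layer_2 = layer_1.dot(weights_1_2)

    # Backward pass: push the error back through each layer
    layer_2_delta = layer_2 - y
    layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu_deriv(layer_1)

    # Plain gradient-descent weight updates
    weights_1_2 -= alpha * layer_1.T.dot(layer_2_delta)
    weights_0_1 -= alpha * X.T.dot(layer_1_delta)

    if iteration % 50 == 49:
        print("Error:", float(np.sum((layer_2 - y) ** 2)))

Every quantity here is an explicit NumPy array and every update is written out by hand, which is exactly the level at which the book works.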
The book is organized into two parts. Part 1 introduces the fundamentals of neural networks across 9 chapters.
Part 2 covers the advanced layers and architectures of deep learning across 7 chapters.
What sets Grokking Deep Learning apart, at a time when books that merely teach you to call packaged libraries are everywhere, is how principled it is: over more than ten chapters of build-up, the author gradually assembles a miniature deep learning library, which is arguably the book's greatest value.
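To make "miniature deep learning library" concrete, here is a rough sketch, under my own naming, of what the core of such a mini-library can look like: a tensor object that records how it was produced and back-propagates gradients through those operations. This is not the book's actual code, and the class and method names (Tensor, matmul, backward) are hypothetical.

import numpy as np

class Tensor:
    # A bare-bones autograd tensor: it remembers how it was created
    # so that backward() can push gradients to its parents.
    # Note: this naive traversal assumes each tensor is used only once
    # (a tree-shaped graph, not a general DAG).
    def __init__(self, data, parents=(), backward_fn=None):
        self.data = np.asarray(data, dtype=float)
        self.grad = np.zeros_like(self.data)
        self.parents = parents
        self.backward_fn = backward_fn

    def __add__(self, other):
        out = Tensor(self.data + other.data, parents=(self, other))
        def backward_fn(grad):
            self.grad += grad
            other.grad += grad
        out.backward_fn = backward_fn
        return out

    def matmul(self, other):
        out = Tensor(self.data.dot(other.data), parents=(self, other))
        def backward_fn(grad):
            self.grad += grad.dot(other.data.T)
            other.grad += self.data.T.dot(grad)
        out.backward_fn = backward_fn
        return out

    def sum(self):
        out = Tensor(self.data.sum(), parents=(self,))
        def backward_fn(grad):
            self.grad += grad * np.ones_like(self.data)
        out.backward_fn = backward_fn
        return out

    def backward(self):
        # Seed the gradient at the output and walk back through the graph.
        self.grad = np.ones_like(self.data)
        stack = [self]
        while stack:
            node = stack.pop()
            if node.backward_fn is not None:
                node.backward_fn(node.grad)
            stack.extend(node.parents)

# Usage: gradients of a scalar loss with respect to the weights
x = Tensor([[1.0, 2.0]])
w = Tensor([[3.0], [4.0]])
loss = x.matmul(w).sum()
loss.backward()
print(w.grad)

Layers, optimizers, and loss functions can then be built as thin wrappers around such a tensor type, which is the general shape of what the book works toward.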
Book Resources
Grokking Deep Learning can now be read online, and all of the source code from the book has been open-sourced.
Read it online at:
https://livebook.manning.com/#!/book/grokking-deep-learning/brief-contents/v-12/
Code repository:
https://github.com/iamtrask/Grokking-Deep-Learning
All of the code in the book is implemented in plain Python rather than by simply calling a library, which goes a long way toward helping you understand the concepts and principles behind deep learning. For example, here is the book's Python implementation of a CNN model:
import numpy as np, sys
np.random.seed(1)

from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Use the first 1,000 MNIST images, flattened and scaled to [0, 1]
images, labels = (x_train[0:1000].reshape(1000, 28*28) / 255, y_train[0:1000])
one_hot_labels = np.zeros((len(labels), 10))
for i, l in enumerate(labels):
    one_hot_labels[i][l] = 1
labels = one_hot_labels

test_images = x_test.reshape(len(x_test), 28*28) / 255
test_labels = np.zeros((len(y_test), 10))
for i, l in enumerate(y_test):
    test_labels[i][l] = 1

def tanh(x):
    return np.tanh(x)

def tanh2deriv(output):
    return 1 - (output ** 2)

def softmax(x):
    temp = np.exp(x)
    return temp / np.sum(temp, axis=1, keepdims=True)

alpha, iterations = (2, 300)
pixels_per_image, num_labels = (784, 10)
batch_size = 128

input_rows = 28
input_cols = 28

kernel_rows = 3
kernel_cols = 3
num_kernels = 16

hidden_size = ((input_rows - kernel_rows) * (input_cols - kernel_cols)) * num_kernels

# weights_0_1 = 0.02*np.random.random((pixels_per_image,hidden_size))-0.01
kernels = 0.02 * np.random.random((kernel_rows*kernel_cols, num_kernels)) - 0.01
weights_1_2 = 0.2 * np.random.random((hidden_size, num_labels)) - 0.1

def get_image_section(layer, row_from, row_to, col_from, col_to):
    # Slice the same patch out of every image in the batch
    section = layer[:, row_from:row_to, col_from:col_to]
    return section.reshape(-1, 1, row_to-row_from, col_to-col_from)

for j in range(iterations):
    correct_cnt = 0
    for i in range(int(len(images) / batch_size)):
        batch_start, batch_end = ((i * batch_size), ((i+1) * batch_size))
        layer_0 = images[batch_start:batch_end]
        layer_0 = layer_0.reshape(layer_0.shape[0], 28, 28)

        # Expand every 3x3 patch so the convolution becomes one matrix multiply
        sects = list()
        for row_start in range(layer_0.shape[1] - kernel_rows):
            for col_start in range(layer_0.shape[2] - kernel_cols):
                sect = get_image_section(layer_0,
                                         row_start, row_start+kernel_rows,
                                         col_start, col_start+kernel_cols)
                sects.append(sect)

        expanded_input = np.concatenate(sects, axis=1)
        es = expanded_input.shape
        flattened_input = expanded_input.reshape(es[0]*es[1], -1)

        # Forward pass: convolution -> tanh -> dropout -> softmax
        kernel_output = flattened_input.dot(kernels)
        layer_1 = tanh(kernel_output.reshape(es[0], -1))
        dropout_mask = np.random.randint(2, size=layer_1.shape)
        layer_1 *= dropout_mask * 2
        layer_2 = softmax(np.dot(layer_1, weights_1_2))

        for k in range(batch_size):
            labelset = labels[batch_start+k:batch_start+k+1]
            _inc = int(np.argmax(layer_2[k:k+1]) == np.argmax(labelset))
            correct_cnt += _inc

        # Backward pass: propagate the error and update kernels and weights
        layer_2_delta = (labels[batch_start:batch_end] - layer_2) \
                        / (batch_size * layer_2.shape[0])
        layer_1_delta = layer_2_delta.dot(weights_1_2.T) * tanh2deriv(layer_1)
        layer_1_delta *= dropout_mask
        weights_1_2 += alpha * layer_1.T.dot(layer_2_delta)
        l1d_reshape = layer_1_delta.reshape(kernel_output.shape)
        k_update = flattened_input.T.dot(l1d_reshape)
        kernels -= alpha * k_update

    # Evaluate on the test set (no dropout at test time)
    test_correct_cnt = 0
    for i in range(len(test_images)):
        layer_0 = test_images[i:i+1]
        # layer_1 = tanh(np.dot(layer_0,weights_0_1))
        layer_0 = layer_0.reshape(layer_0.shape[0], 28, 28)

        sects = list()
        for row_start in range(layer_0.shape[1] - kernel_rows):
            for col_start in range(layer_0.shape[2] - kernel_cols):
                sect = get_image_section(layer_0,
                                         row_start, row_start+kernel_rows,
                                         col_start, col_start+kernel_cols)
                sects.append(sect)

        expanded_input = np.concatenate(sects, axis=1)
        es = expanded_input.shape
        flattened_input = expanded_input.reshape(es[0]*es[1], -1)

        kernel_output = flattened_input.dot(kernels)
        layer_1 = tanh(kernel_output.reshape(es[0], -1))
        layer_2 = np.dot(layer_1, weights_1_2)

        test_correct_cnt += int(np.argmax(layer_2) == np.argmax(test_labels[i:i+1]))

    if (j % 1 == 0):
        sys.stdout.write("\n" +
                         "I:" + str(j) +
                         " Test-Acc:" + str(test_correct_cnt/float(len(test_images))) +
                         " Train-Acc:" + str(correct_cnt/float(len(images))))

Resource Download
Finally, a packaged download of the PDF for the book's first 11 chapters plus all of the source code is available. To get it:
1. Scan the QR code below and follow the WeChat official account "AI有道".
2. Reply to the official account with the keyword: GDL