Learning Caffe (2) Using Caffe: loading models + adding new layers + finetuning
Copyright notice: this is an original post by the author and may not be reproduced without permission. https://blog.csdn.net/u014230646/article/details/51934150
How to use Caffe
Caffe tutorial (http://robots.princeton.edu/courses/COS598/2015sp/slides/Caffe/caffe_tutorial.pdf)
Prerequisites
Google Protocol Buffer
https://developers.google.com/protocol-buffers/docs/cpptutorial
Caffe uses Google Protocol Buffers for reading, processing, and storing its data. Protocol Buffers (PB) is a lightweight, efficient format for serializing structured data, well suited to data storage and RPC data exchange. It is a language-neutral, platform-neutral, extensible way to serialize structured data for communication protocols, storage, and more, and its binary wire format offers excellent efficiency and compatibility. APIs are currently provided for C++, Java, and Python; Caffe uses the C++ and Python APIs.
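As a concrete illustration, a Protocol Buffer schema is a .proto file of typed message definitions; the fragment below is a minimal sketch in the style of Caffe's caffe.proto (the message and field names here are illustrative, not the real schema):

```proto
// Illustrative schema only; see src/caffe/proto/caffe.proto for the real one.
message LayerSpec {
  optional string name = 1;          // layer name, e.g. "conv1"
  optional string type = 2;          // layer type, e.g. "Convolution"
  repeated uint32 kernel_size = 3;   // repeated field: may hold several values
}
```

From such a schema, the protoc compiler generates C++ classes with plain accessor methods, which is why the Caffe snippets below can read layer parameters with simple method calls like `name()` and `type()`.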
Reproduced from https://github.com/shicai/Caffe_Manual/blob/master/ReadMe.md
Initializing the network
```cpp
#include "caffe/caffe.hpp"
#include <string>
#include <vector>
using namespace caffe;

char *proto = "H:\\Models\\Caffe\\deploy.prototxt"; /* load the CaffeNet configuration */
Phase phase = TEST; /* or TRAIN */
Caffe::set_mode(Caffe::CPU);
// Caffe::set_mode(Caffe::GPU);
// Caffe::SetDevice(0);

//! Note: every "net" mentioned below refers to this net
boost::shared_ptr<Net<float> > net(new caffe::Net<float>(proto, phase));
```

Loading a trained model
```cpp
char *model = "H:\\Models\\Caffe\\bvlc_reference_caffenet.caffemodel";
net->CopyTrainedLayersFrom(model);
```

Reading each layer's configuration parameters from a model
```cpp
char *model = "H:\\Models\\Caffe\\bvlc_reference_caffenet.caffemodel";
NetParameter param;
ReadNetParamsFromBinaryFileOrDie(model, &param);
int num_layers = param.layer_size();
for (int i = 0; i < num_layers; ++i) {
  // configuration parameters: name, type, kernel size, pad, stride, etc.
  LOG(ERROR) << "Layer " << i << ":" << param.layer(i).name() << "\t"
             << param.layer(i).type();
  if (param.layer(i).type() == "Convolution") {
    ConvolutionParameter conv_param = param.layer(i).convolution_param();
    LOG(ERROR) << "\t\tkernel size: " << conv_param.kernel_size()
               << ", pad: " << conv_param.pad()
               << ", stride: " << conv_param.stride();
  }
}
```

Reading the image mean
```cpp
char *mean_file = "H:\\Models\\Caffe\\imagenet_mean.binaryproto";
Blob<float> image_mean;
BlobProto blob_proto;
const float *mean_ptr;
unsigned int num_pixel;

bool succeed = ReadProtoFromBinaryFile(mean_file, &blob_proto);
if (succeed) {
  image_mean.FromProto(blob_proto);
  num_pixel = image_mean.count(); /* NCHW = 1x3x256x256 = 196608 */
  mean_ptr = (const float *) image_mean.cpu_data();
}
```

Forward-propagating the network on given data
```cpp
//! Note: data_ptr points to data that has already been preprocessed
//! (mean-subtracted, matching the network's input size and batch size)
void caffe_forward(boost::shared_ptr<Net<float> > &net, float *data_ptr) {
  Blob<float> *input_blobs = net->input_blobs()[0];
  switch (Caffe::mode()) {
  case Caffe::CPU:
    memcpy(input_blobs->mutable_cpu_data(), data_ptr,
           sizeof(float) * input_blobs->count());
    break;
  case Caffe::GPU:
    cudaMemcpy(input_blobs->mutable_gpu_data(), data_ptr,
               sizeof(float) * input_blobs->count(), cudaMemcpyHostToDevice);
    break;
  default:
    LOG(FATAL) << "Unknown Caffe mode.";
  }
  net->ForwardPrefilled();
}
```

Getting a feature blob's index in the network from its name
```cpp
//! Note: a Net's blobs are the output data of each layer, i.e. the feature maps
// char *query_blob_name = "conv1";
unsigned int get_blob_index(boost::shared_ptr<Net<float> > &net,
                            char *query_blob_name) {
  std::string str_query(query_blob_name);
  vector<string> const &blob_names = net->blob_names();
  for (unsigned int i = 0; i != blob_names.size(); ++i) {
    if (str_query == blob_names[i]) {
      return i;
    }
  }
  LOG(FATAL) << "Unknown blob name: " << str_query;
}
```

Reading the data of a given feature blob
```cpp
//! Note: per CaffeNet's deploy.prototxt, this net has 15 blobs, from data to prob
char *query_blob_name = "conv1"; /* data, conv1, pool1, norm1, fc6, prob, etc. */
unsigned int blob_id = get_blob_index(net, query_blob_name);

boost::shared_ptr<Blob<float> > blob = net->blobs()[blob_id];
unsigned int num_data = blob->count(); /* NCHW = 10x96x55x55 */
const float *blob_ptr = (const float *) blob->cpu_data();
```

Extracting features for a file list and saving them to binary files
See get_features.cpp for details.
The main steps are:
- Generate a file list in the same format used for training, one image per line: the full file path, a space, and a label (use 0 if there is none)
- Derive a feat.prototxt from the train_val or deploy prototxt: mainly replace the input layer with an image_data layer, and append prob and argmax layers (to output probabilities and the Top-1/5 predicted labels)
- Run the program with the chosen parameters; it writes several binary files whose data can be loaded and analyzed in MATLAB
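For the first step, the resulting file list looks like the following (the paths and labels here are made up for illustration):

```
/data/images/cat_0001.jpg 0
/data/images/cat_0002.jpg 0
/data/images/dog_0001.jpg 1
```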
Getting a layer's index in the network from its name
```cpp
//! Note: layers cover every layer of the network; CaffeNet, for example, has 23 layers
// char *query_layer_name = "conv1";
unsigned int get_layer_index(boost::shared_ptr<Net<float> > &net,
                             char *query_layer_name) {
  std::string str_query(query_layer_name);
  vector<string> const &layer_names = net->layer_names();
  for (unsigned int i = 0; i != layer_names.size(); ++i) {
    if (str_query == layer_names[i]) {
      return i;
    }
  }
  LOG(FATAL) << "Unknown layer name: " << str_query;
}
```

Reading a given layer's weight data
```cpp
//! Note: unlike a Net's blobs (feature maps), a Layer's blobs are the weights
//! and biases of layers such as Conv and FC
char *query_layer_name = "conv1";
const float *weight_ptr, *bias_ptr;
unsigned int layer_id = get_layer_index(net, query_layer_name);
boost::shared_ptr<Layer<float> > layer = net->layers()[layer_id];
std::vector<boost::shared_ptr<Blob<float> > > blobs = layer->blobs();
if (blobs.size() > 0) {
  weight_ptr = (const float *) blobs[0]->cpu_data();
  bias_ptr = (const float *) blobs[1]->cpu_data();
}
//! Note: in training mode, reading a layer's gradient data works the same way;
//! the only difference is replacing cpu_data with cpu_diff
```

Modifying a layer's weight data
```cpp
const float *data_ptr;      /* pointer to the data to be written (source) */
float *weight_ptr = NULL;   /* pointer to the layer's weights in the net (target) */
unsigned int data_size;     /* amount of data to write */
char *layer_name = "conv1"; /* name of the layer to modify */

unsigned int layer_id = get_layer_index(net, layer_name);
boost::shared_ptr<Blob<float> > blob = net->layers()[layer_id]->blobs()[0];

CHECK(data_size == blob->count());
switch (Caffe::mode()) {
case Caffe::CPU:
  weight_ptr = blob->mutable_cpu_data();
  break;
case Caffe::GPU:
  weight_ptr = blob->mutable_gpu_data();
  break;
default:
  LOG(FATAL) << "Unknown Caffe mode";
}
caffe_copy(blob->count(), data_ptr, weight_ptr);
//! Note: in training mode, manually modifying a layer's gradients works the
//! same way: replace mutable_cpu_data with mutable_cpu_diff, and
//! mutable_gpu_data with mutable_gpu_diff
```

Saving the new model
```cpp
char *weights_file = "bvlc_reference_caffenet_new.caffemodel";
NetParameter net_param;
net->ToProto(&net_param, false);
WriteProtoToBinaryFile(net_param, weights_file);
```

Adding a new layer to Caffe
https://github.com/BVLC/caffe/wiki/Development
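Following the pattern described on that wiki page, a new layer is a C++ class that implements Forward/Backward and registers itself with the layer factory. The skeleton below is only a sketch of that shape; the layer name and the toy scale-by-2 operation are invented for illustration, and it compiles only inside the Caffe source tree:

```cpp
#include "caffe/layers/neuron_layer.hpp"

namespace caffe {

// Toy element-wise layer: top = 2 * bottom (illustrative only)
template <typename Dtype>
class MyScaleLayer : public NeuronLayer<Dtype> {
 public:
  explicit MyScaleLayer(const LayerParameter& param)
      : NeuronLayer<Dtype>(param) {}
  virtual inline const char* type() const { return "MyScale"; }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
                           const vector<Blob<Dtype>*>& top) {
    const Dtype* bottom_data = bottom[0]->cpu_data();
    Dtype* top_data = top[0]->mutable_cpu_data();
    for (int i = 0; i < bottom[0]->count(); ++i) {
      top_data[i] = bottom_data[i] * Dtype(2);
    }
  }
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
                            const vector<bool>& propagate_down,
                            const vector<Blob<Dtype>*>& bottom) {
    if (propagate_down[0]) {
      const Dtype* top_diff = top[0]->cpu_diff();
      Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
      for (int i = 0; i < bottom[0]->count(); ++i) {
        bottom_diff[i] = top_diff[i] * Dtype(2);  // d(2x)/dx = 2
      }
    }
  }
};

INSTANTIATE_CLASS(MyScaleLayer);
REGISTER_LAYER_CLASS(MyScale);

}  // namespace caffe
```

The REGISTER_LAYER_CLASS macro makes the layer available by the name "MyScale" in prototxt files; a GPU version (Forward_gpu/Backward_gpu) can be added later in a separate .cu file.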
Initializing from pretrained network parameters
Caffe initializes parameters by matching layer names against the caffemodel, so a layer you rename will no longer match and will be initialized randomly instead, while the remaining layers keep their pretrained weights.
- Rename the layers you want re-initialized so the earlier layers keep their pretrained parameters, give the later (renamed) layers a higher learning rate, and use a base learning rate of roughly 0.00001.
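For example, to finetune CaffeNet for a new task one typically renames the last fully connected layer in the prototxt so it is re-initialized, and raises its learning-rate multipliers. The layer name, multiplier values, and output count below are illustrative:

```
layer {
  name: "fc8_new"   # renamed: no longer matches the caffemodel, so re-initialized
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_new"
  param { lr_mult: 10 decay_mult: 1 }  # weights learn faster than the base_lr
  param { lr_mult: 20 decay_mult: 0 }  # biases
  inner_product_param {
    num_output: 20  # number of classes in the new task
    weight_filler { type: "gaussian" std: 0.01 }
  }
}
```

The effective learning rate of each parameter blob is base_lr * lr_mult, so the pretrained layers (lr_mult 1) move slowly while the renamed layer adapts quickly.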