Caffe source code analysis: caffe.proto
To read the Caffe source code, the first file to study is caffe.proto. It lives under …\src\caffe\proto, alongside a .pb.cc and a .pb.h file, both generated from caffe.proto by the protobuf compiler. caffe.proto defines many structured data types (messages), including:

· BlobProto
· Datum
· FillerParameter
· NetParameter
· SolverParameter
· SolverState
· LayerParameter
· ConcatParameter
· ConvolutionParameter
· DataParameter
· DropoutParameter
· HDF5DataParameter
· HDF5OutputParameter
· ImageDataParameter
· InfogainLossParameter
· InnerProductParameter
· LRNParameter
· MemoryDataParameter
· PoolingParameter
· PowerParameter
· WindowDataParameter
· V0LayerParameter
Several important data types in caffe.proto

Everything in caffe.pb.cc is generated from caffe.proto; it consists of the standard operations on these data structures (classes), such as:

  void CopyFrom();
  void MergeFrom();
  void Clear();
  bool IsInitialized() const;
  int ByteSize() const;
  bool MergePartialFromCodedStream();
  void SerializeWithCachedSizes() const;
  SerializeWithCachedSizesToArray() const;
  int GetCachedSize();
  void SharedCtor();
  void SharedDtor();
  void SetCachedSize() const;

<0> BlobProto

message BlobProto { // blob attributes plus the blob's payload (data and diff)
  optional int32 num = 1 [default = 0];
  optional int32 channels = 2 [default = 0];
  optional int32 height = 3 [default = 0];
  optional int32 width = 4 [default = 0];
  repeated float data = 5 [packed = true];
  repeated float diff = 6 [packed = true];
}

<1> Datum

message Datum {
  optional int32 channels = 1;
  optional int32 height = 2;
  optional int32 width = 3;
  optional bytes data = 4;        // the actual image data, stored as bytes
  optional int32 label = 5;
  repeated float float_data = 6;  // a Datum can also hold float data
}

<2> LayerParameter

message LayerParameter {
  repeated string bottom = 2;   // names of the input blobs (string)
  repeated string top = 3;      // names of the output blobs (string)
  optional string name = 4;     // the layer name
  enum LayerType {              // layer type enumeration (same as a C++ enum)
    NONE = 0;
    ACCURACY = 1;
    BNLL = 2;
    CONCAT = 3;
    CONVOLUTION = 4;
    DATA = 5;
    DROPOUT = 6;
    EUCLIDEAN_LOSS = 7;
    ELTWISE_PRODUCT = 25;
    FLATTEN = 8;
    HDF5_DATA = 9;
    HDF5_OUTPUT = 10;
    HINGE_LOSS = 28;
    IM2COL = 11;
    IMAGE_DATA = 12;
    INFOGAIN_LOSS = 13;
    INNER_PRODUCT = 14;
    LRN = 15;
    MEMORY_DATA = 29;
    MULTINOMIAL_LOGISTIC_LOSS = 16;
    POOLING = 17;
    POWER = 26;
    RELU = 18;
    SIGMOID = 19;
    SIGMOID_CROSS_ENTROPY_LOSS = 27;
    SOFTMAX = 20;
    SOFTMAX_LOSS = 21;
    SPLIT = 22;
    TANH = 23;
    WINDOW_DATA = 24;
  }
  optional LayerType type = 5;     // the layer type
  repeated BlobProto blobs = 6;    // the layer's numeric parameters
  repeated float blobs_lr = 7;     // learning-rate multipliers (repeated); if you set the rate for one blob, you must set it for all blobs
  repeated float weight_decay = 8; // weight-decay multipliers (repeated)
  // parameters specific to particular layer types (optional)
  optional ConcatParameter concat_param = 9;
  optional ConvolutionParameter convolution_param = 10;
  optional DataParameter data_param = 11;
  optional DropoutParameter dropout_param = 12;
  optional HDF5DataParameter hdf5_data_param = 13;
  optional HDF5OutputParameter hdf5_output_param = 14;
  optional ImageDataParameter image_data_param = 15;
  optional InfogainLossParameter infogain_loss_param = 16;
  optional InnerProductParameter inner_product_param = 17;
  optional LRNParameter lrn_param = 18;
  optional MemoryDataParameter memory_data_param = 22;
  optional PoolingParameter pooling_param = 19;
  optional PowerParameter power_param = 21;
  optional WindowDataParameter window_data_param = 20;
  optional V0LayerParameter layer = 1;
}

<3> NetParameter

message NetParameter {
  optional string name = 1;            // the network's name
  repeated LayerParameter layers = 2;  // repeated works like an array
  repeated string input = 3;           // names of the input blobs
  repeated int32 input_dim = 4;        // dimensions of the input blobs; there should be (4 * #input) values
  optional bool force_backward = 5 [default = false];  // whether to force backward propagation; if false, backward is decided automatically from the net structure and learning rates
}
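The (4 * #input) constraint on input_dim is easy to check by hand. A minimal sketch in plain Python (no protobuf library assumed; the helper name is made up for illustration):

```python
def group_input_dims(input_names, input_dim):
    """Group a flat NetParameter.input_dim list into per-blob
    (num, channels, height, width) tuples, enforcing 4 * #input."""
    if len(input_dim) != 4 * len(input_names):
        raise ValueError("input_dim must hold 4 values per input blob")
    return {
        name: tuple(input_dim[4 * i : 4 * i + 4])
        for i, name in enumerate(input_names)
    }

shapes = group_input_dims(["data"], [10, 3, 227, 227])
print(shapes["data"])  # (10, 3, 227, 227)
```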
<4> SolverParameter

message SolverParameter {
  optional string train_net = 1;  // proto file for the training net
  optional string test_net = 2;   // proto file for the testing net
  optional int32 test_iter = 3 [default = 0];      // number of iterations per testing phase
  optional int32 test_interval = 4 [default = 0];  // number of iterations between two testing phases
  optional bool test_compute_loss = 19 [default = false];
  optional float base_lr = 5;     // base learning rate
  optional int32 display = 6;     // number of iterations between displaying info
  optional int32 max_iter = 7;    // maximum number of iterations
  optional string lr_policy = 8;  // learning-rate decay policy
  optional float gamma = 9;       // a parameter for computing the learning rate
  optional float power = 10;      // a parameter for computing the learning rate
  optional float momentum = 11;   // momentum
  optional float weight_decay = 12;  // weight decay
  optional int32 stepsize = 13;      // step size for learning-rate decay
  optional int32 snapshot = 14 [default = 0];  // snapshot interval
  optional string snapshot_prefix = 15;        // prefix for snapshot files
  optional bool snapshot_diff = 16 [default = false];  // whether to snapshot diff as well
  enum SolverMode {
    CPU = 0;
    GPU = 1;
  }
  optional SolverMode solver_mode = 17 [default = GPU];  // solver mode, GPU by default
  optional int32 device_id = 18 [default = 0];           // GPU device ID
  optional int64 random_seed = 20 [default = -1];        // random seed
}
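To make the learning-rate fields concrete: with lr_policy = "step", the effective rate at iteration iter is base_lr * gamma^floor(iter / stepsize), so base_lr, gamma, and stepsize work together. A minimal sketch in plain Python:

```python
def step_lr(base_lr, gamma, stepsize, it):
    """Learning rate under lr_policy = "step":
    base_lr * gamma ** floor(it / stepsize)."""
    return base_lr * gamma ** (it // stepsize)

# With base_lr = 0.01, gamma = 0.1, stepsize = 100000,
# the rate drops by 10x every 100000 iterations.
print(step_lr(0.01, 0.1, 100000, 0))  # 0.01
print(step_lr(0.01, 0.1, 100000, 150000))
```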
The full caffe.proto source:

// Copyright 2014 BVLC and contributors.

package caffe;

message BlobProto {
  optional int32 num = 1 [default = 0];
  optional int32 channels = 2 [default = 0];
  optional int32 height = 3 [default = 0];
  optional int32 width = 4 [default = 0];
  repeated float data = 5 [packed = true];
  repeated float diff = 6 [packed = true];
}

// The BlobProtoVector is simply a way to pass multiple blobproto instances
// around.
message BlobProtoVector {
  repeated BlobProto blobs = 1;
}

message Datum {
  optional int32 channels = 1;
  optional int32 height = 2;
  optional int32 width = 3;
  // the actual image data, in bytes
  optional bytes data = 4;
  optional int32 label = 5;
  // Optionally, the datum could also hold float data.
  repeated float float_data = 6;
}

message FillerParameter {
  // The filler type.
  optional string type = 1 [default = 'constant'];
  optional float value = 2 [default = 0]; // the value in constant filler
  optional float min = 3 [default = 0];   // the min value in uniform filler
  optional float max = 4 [default = 1];   // the max value in uniform filler
  optional float mean = 5 [default = 0];  // the mean value in Gaussian filler
  optional float std = 6 [default = 1];   // the std value in Gaussian filler
  // The expected number of non-zero input weights for a given output in
  // Gaussian filler -- the default -1 means don't perform sparsification.
  optional int32 sparse = 7 [default = -1];
}
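A FillerParameter just selects how a blob is initialized. The sketch below mimics the three most common fillers in plain Python; the function and keyword names (ftype, vmin, vmax) are illustrative, not Caffe's API:

```python
import random

def fill(n, ftype="constant", value=0.0, vmin=0.0, vmax=1.0, mean=0.0, std=1.0):
    """Return n initial weights according to a FillerParameter-style spec."""
    if ftype == "constant":
        return [value] * n
    if ftype == "uniform":
        return [random.uniform(vmin, vmax) for _ in range(n)]
    if ftype == "gaussian":
        return [random.gauss(mean, std) for _ in range(n)]
    raise ValueError("unknown filler type: " + ftype)

print(fill(3))  # [0.0, 0.0, 0.0] -- the 'constant' default
```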
message NetParameter {
  optional string name = 1; // consider giving the network a name
  repeated LayerParameter layers = 2; // a bunch of layers.
  // The input blobs to the network.
  repeated string input = 3;
  // The dim of the input blobs. For each input blob there should be four
  // values specifying the num, channels, height and width of the input blob.
  // Thus, there should be a total of (4 * #input) numbers.
  repeated int32 input_dim = 4;
  // Whether the network will force every layer to carry out backward operation.
  // If set False, then whether to carry out backward is determined
  // automatically according to the net structure and learning rates.
  optional bool force_backward = 5 [default = false];
}
message SolverParameter {
  optional string train_net = 1; // The proto file for the training net.
  optional string test_net = 2;  // The proto file for the testing net.
  // The number of iterations for each testing phase.
  optional int32 test_iter = 3 [default = 0];
  // The number of iterations between two testing phases.
  optional int32 test_interval = 4 [default = 0];
  optional bool test_compute_loss = 19 [default = false];
  optional float base_lr = 5; // The base learning rate
  // the number of iterations between displaying info. If display = 0, no info
  // will be displayed.
  optional int32 display = 6;
  optional int32 max_iter = 7;   // the maximum number of iterations
  optional string lr_policy = 8; // The learning rate decay policy.
  optional float gamma = 9;      // The parameter to compute the learning rate.
  optional float power = 10;     // The parameter to compute the learning rate.
  optional float momentum = 11;  // The momentum value.
  optional float weight_decay = 12; // The weight decay.
  optional int32 stepsize = 13; // the stepsize for learning rate policy "step"
  optional int32 snapshot = 14 [default = 0]; // The snapshot interval
  optional string snapshot_prefix = 15; // The prefix for the snapshot.
  // whether to snapshot diff in the results or not. Snapshotting diff will help
  // debugging but the final protocol buffer size will be much larger.
  optional bool snapshot_diff = 16 [default = false];
  // the mode solver will use: 0 for CPU and 1 for GPU. Use GPU in default.
  enum SolverMode {
    CPU = 0;
    GPU = 1;
  }
  optional SolverMode solver_mode = 17 [default = GPU];
  // the device_id that will be used in GPU mode. Use device_id = 0 in default.
  optional int32 device_id = 18 [default = 0];
  // If non-negative, the seed with which the Solver will initialize the Caffe
  // random number generator -- useful for reproducible results. Otherwise,
  // (and by default) initialize using a seed derived from the system clock.
  optional int64 random_seed = 20 [default = -1];
}

// A message that stores the solver snapshots
message SolverState {
  optional int32 iter = 1;         // The current iteration
  optional string learned_net = 2; // The file that stores the learned net.
  repeated BlobProto history = 3;  // The history for sgd solvers
}
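SolverState.history holds one BlobProto per learnable blob: the previous update vector that momentum SGD carries between iterations. The sketch below shows a plain momentum update with weight decay folded into the gradient; it illustrates how momentum, weight_decay, the learning rate, and history interact, and is not Caffe's exact code:

```python
def sgd_update(w, grad, hist, lr=0.01, momentum=0.9, weight_decay=0.0005):
    """One momentum-SGD step over flat lists of weights and gradients.
    `hist` plays the role of SolverState.history (the previous update)."""
    new_hist = [
        momentum * h + lr * (g + weight_decay * x)
        for h, g, x in zip(hist, grad, w)
    ]
    new_w = [x - h for x, h in zip(w, new_hist)]
    return new_w, new_hist

w, hist = [1.0, -2.0], [0.0, 0.0]
w, hist = sgd_update(w, [0.5, 0.5], hist)  # first step: hist was all zeros
```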
// Update the next available ID when you add a new LayerParameter field.
//
// LayerParameter next available ID: 23 (last added: memory_data_param)
message LayerParameter {
  repeated string bottom = 2; // the name of the bottom blobs
  repeated string top = 3;    // the name of the top blobs
  optional string name = 4;   // the layer name

  // Add new LayerTypes to the enum below in lexicographical order (other than
  // starting with NONE), starting with the next available ID in the comment
  // line above the enum. Update the next available ID when you add a new
  // LayerType.
  //
  // LayerType next available ID: 30 (last added: MEMORY_DATA)
  enum LayerType {
    // "NONE" layer type is 0th enum element so that we don't cause confusion
    // by defaulting to an existent LayerType (instead, should usually error if
    // the type is unspecified).
    NONE = 0;
    ACCURACY = 1;
    BNLL = 2;
    CONCAT = 3;
    CONVOLUTION = 4;
    DATA = 5;
    DROPOUT = 6;
    EUCLIDEAN_LOSS = 7;
    ELTWISE_PRODUCT = 25;
    FLATTEN = 8;
    HDF5_DATA = 9;
    HDF5_OUTPUT = 10;
    HINGE_LOSS = 28;
    IM2COL = 11;
    IMAGE_DATA = 12;
    INFOGAIN_LOSS = 13;
    INNER_PRODUCT = 14;
    LRN = 15;
    MEMORY_DATA = 29;
    MULTINOMIAL_LOGISTIC_LOSS = 16;
    POOLING = 17;
    POWER = 26;
    RELU = 18;
    SIGMOID = 19;
    SIGMOID_CROSS_ENTROPY_LOSS = 27;
    SOFTMAX = 20;
    SOFTMAX_LOSS = 21;
    SPLIT = 22;
    TANH = 23;
    WINDOW_DATA = 24;
  }
  optional LayerType type = 5; // the layer type from the enum above

  // The blobs containing the numeric parameters of the layer
  repeated BlobProto blobs = 6;
  // The ratio that is multiplied on the global learning rate. If you want to
  // set the learning ratio for one blob, you need to set it for all blobs.
  repeated float blobs_lr = 7;
  // The weight decay that is multiplied on the global weight decay.
  repeated float weight_decay = 8;

  // Parameters for particular layer types.
  optional ConcatParameter concat_param = 9;
  optional ConvolutionParameter convolution_param = 10;
  optional DataParameter data_param = 11;
  optional DropoutParameter dropout_param = 12;
  optional HDF5DataParameter hdf5_data_param = 13;
  optional HDF5OutputParameter hdf5_output_param = 14;
  optional ImageDataParameter image_data_param = 15;
  optional InfogainLossParameter infogain_loss_param = 16;
  optional InnerProductParameter inner_product_param = 17;
  optional LRNParameter lrn_param = 18;
  optional MemoryDataParameter memory_data_param = 22;
  optional PoolingParameter pooling_param = 19;
  optional PowerParameter power_param = 21;
  optional WindowDataParameter window_data_param = 20;

  // DEPRECATED: The layer parameters specified as a V0LayerParameter.
  // This should never be used by any code except to upgrade to the new
  // LayerParameter specification.
  optional V0LayerParameter layer = 1;
}
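The note on blobs_lr above (set the multiplier for every blob or for none) is exactly the kind of constraint a prototxt sanity check can enforce. An illustrative checker in plain Python (the names are made up, not Caffe's API):

```python
def check_blob_multipliers(num_blobs, blobs_lr, weight_decay):
    """Per-blob multipliers must be given for all blobs or omitted entirely."""
    for field, values in (("blobs_lr", blobs_lr), ("weight_decay", weight_decay)):
        if values and len(values) != num_blobs:
            raise ValueError(
                f"{field} has {len(values)} entries but the layer has "
                f"{num_blobs} blobs; set it for all blobs or for none"
            )

# A conv layer with weight + bias blobs: lr 1x for weights, 2x for bias.
check_blob_multipliers(2, [1.0, 2.0], [1.0, 0.0])
check_blob_multipliers(2, [], [])  # omitting both is also fine
```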
// Message that stores parameters used by ConcatLayer
message ConcatParameter {
  // Concat Layer needs to specify the dimension along the concat will happen,
  // the other dimensions must be the same for all the bottom blobs
  // By default it will concatenate blobs along channels dimension
  optional uint32 concat_dim = 1 [default = 1];
}

// Message that stores parameters used by ConvolutionLayer
message ConvolutionParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  optional bool bias_term = 2 [default = true]; // whether to have bias terms
  optional uint32 pad = 3 [default = 0];    // The padding size
  optional uint32 kernel_size = 4;          // The kernel size
  optional uint32 group = 5 [default = 1];  // The group size for group conv
  optional uint32 stride = 6 [default = 1]; // The stride
  optional FillerParameter weight_filler = 7; // The filler for the weight
  optional FillerParameter bias_filler = 8;   // The filler for the bias
}
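Given kernel_size, pad, and stride, the spatial output size of a convolution is floor((input + 2 * pad - kernel_size) / stride) + 1. A quick helper to see the arithmetic:

```python
def conv_output_size(in_size, kernel_size, pad=0, stride=1):
    """Spatial output size of a convolution:
    floor((in + 2*pad - kernel) / stride) + 1."""
    return (in_size + 2 * pad - kernel_size) // stride + 1

# An AlexNet-style first layer: 227x227 input, 11x11 kernel, stride 4.
print(conv_output_size(227, 11, pad=0, stride=4))  # 55
# A 3x3 kernel with pad 1, stride 1 preserves the spatial size.
print(conv_output_size(224, 3, pad=1, stride=1))   # 224
```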
// Message that stores parameters used by DataLayer
message DataParameter {
  // Specify the data source.
  optional string source = 1;
  // For data pre-processing, we can do simple scaling and subtracting the
  // data mean, if provided. Note that the mean subtraction is always carried
  // out before scaling.
  optional float scale = 2 [default = 1];
  optional string mean_file = 3;
  // Specify the batch size.
  optional uint32 batch_size = 4;
  // Specify if we would like to randomly crop an image.
  optional uint32 crop_size = 5 [default = 0];
  // Specify if we want to randomly mirror data.
  optional bool mirror = 6 [default = false];
  // The rand_skip variable is for the data layer to skip a few data points
  // to avoid all asynchronous sgd clients to start at the same point. The skip
  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not
  // be larger than the number of keys in the leveldb.
  optional uint32 rand_skip = 7 [default = 0];
}
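The comment above fixes the preprocessing order: mean subtraction first, then scaling. A minimal sketch over flat pixel lists (illustrative, not Caffe's data-layer code):

```python
def preprocess(pixels, mean, scale=1.0):
    """DataLayer-style preprocessing on a flat list of pixel values:
    subtract the mean first, then multiply by scale."""
    return [(p - m) * scale for p, m in zip(pixels, mean)]

# Map raw [0, 255] pixels into a small symmetric range after removing the mean.
out = preprocess([128.0, 64.0], [104.0, 117.0], scale=1.0 / 256)
```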
// Message that stores parameters used by DropoutLayer
message DropoutParameter {
  optional float dropout_ratio = 1 [default = 0.5]; // dropout ratio
}

// Message that stores parameters used by HDF5DataLayer
message HDF5DataParameter {
  // Specify the data source.
  optional string source = 1;
  // Specify the batch size.
  optional uint32 batch_size = 2;
}

// Message that stores parameters used by HDF5OutputLayer
message HDF5OutputParameter {
  optional string file_name = 1;
}

// Message that stores parameters used by ImageDataLayer
message ImageDataParameter {
  // Specify the data source.
  optional string source = 1;
  // For data pre-processing, we can do simple scaling and subtracting the
  // data mean, if provided. Note that the mean subtraction is always carried
  // out before scaling.
  optional float scale = 2 [default = 1];
  optional string mean_file = 3;
  // Specify the batch size.
  optional uint32 batch_size = 4;
  // Specify if we would like to randomly crop an image.
  optional uint32 crop_size = 5 [default = 0];
  // Specify if we want to randomly mirror data.
  optional bool mirror = 6 [default = false];
  // The rand_skip variable is for the data layer to skip a few data points
  // to avoid all asynchronous sgd clients to start at the same point. The skip
  // point would be set as rand_skip * rand(0,1). Note that rand_skip should not
  // be larger than the number of keys in the leveldb.
  optional uint32 rand_skip = 7 [default = 0];
  // Whether or not ImageLayer should shuffle the list of files at every epoch.
  optional bool shuffle = 8 [default = false];
  // It will also resize images if new_height or new_width are not zero.
  optional uint32 new_height = 9 [default = 0];
  optional uint32 new_width = 10 [default = 0];
}
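The dropout_ratio above controls DropoutLayer's behavior: at training time each input is zeroed with that probability and the survivors are rescaled by 1 / (1 - ratio) so the expected activation is unchanged, while at test time the layer passes data through untouched. A hedged sketch of that behavior (the function and its rng hook are illustrative, not Caffe's implementation):

```python
import random

def dropout(x, ratio=0.5, train=True, rng=random.random):
    """Zero each element with probability `ratio`; scale survivors by
    1 / (1 - ratio) so the expected value is preserved. No-op at test time."""
    if not train:
        return list(x)
    keep = 1.0 - ratio
    return [v / keep if rng() >= ratio else 0.0 for v in x]
```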
// Message that stores parameters used by InfogainLossLayer
message InfogainLossParameter {
  // Specify the infogain matrix source.
  optional string source = 1;
}

// Message that stores parameters used by InnerProductLayer
message InnerProductParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  optional bool bias_term = 2 [default = true]; // whether to have bias terms
  optional FillerParameter weight_filler = 3; // The filler for the weight
  optional FillerParameter bias_filler = 4;   // The filler for the bias
}

// Message that stores parameters used by LRNLayer
message LRNParameter {
  optional uint32 local_size = 1 [default = 5];
  optional float alpha = 2 [default = 1.];
  optional float beta = 3 [default = 0.75];
  enum NormRegion {
    ACROSS_CHANNELS = 0;
    WITHIN_CHANNEL = 1;
  }
  optional NormRegion norm_region = 4 [default = ACROSS_CHANNELS];
}
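An InnerProductLayer with num_output = N is a fully connected layer: each of the N outputs is a weighted sum of all inputs plus an optional bias. A small sketch over plain lists:

```python
def inner_product(x, weights, bias=None):
    """Fully connected layer: y[o] = sum_i W[o][i] * x[i] (+ b[o]).
    `weights` is a num_output x len(x) matrix."""
    y = [sum(w * v for w, v in zip(row, x)) for row in weights]
    if bias is not None:
        y = [a + b for a, b in zip(y, bias)]
    return y

print(inner_product([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], bias=[0.5, 0.5]))
# [1.5, 2.5]
```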
// Message that stores parameters used by MemoryDataLayer
message MemoryDataParameter {
  optional uint32 batch_size = 1;
  optional uint32 channels = 2;
  optional uint32 height = 3;
  optional uint32 width = 4;
}

// Message that stores parameters used by PoolingLayer
message PoolingParameter {
  enum PoolMethod {
    MAX = 0;
    AVE = 1;
    STOCHASTIC = 2;
  }
  optional PoolMethod pool = 1 [default = MAX]; // The pooling method
  optional uint32 kernel_size = 2;          // The kernel size
  optional uint32 stride = 3 [default = 1]; // The stride
  // The padding size -- currently implemented only for average pooling.
  optional uint32 pad = 4 [default = 0];
}
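The MAX and AVE pooling methods slide a window of kernel_size over the input with the given stride. A 1-D sketch of both (illustrative; Caffe pools 2-D feature maps):

```python
def pool1d(x, kernel_size, stride=1, method="MAX"):
    """Pool a 1-D sequence with the given kernel and stride (no padding)."""
    out = []
    for start in range(0, len(x) - kernel_size + 1, stride):
        window = x[start : start + kernel_size]
        out.append(max(window) if method == "MAX" else sum(window) / kernel_size)
    return out

print(pool1d([1, 3, 2, 5, 4], kernel_size=2, stride=2))             # [3, 5]
print(pool1d([1, 3, 2, 5], kernel_size=2, stride=2, method="AVE"))  # [2.0, 3.5]
```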
// Message that stores parameters used by PowerLayer
message PowerParameter {
  // PowerLayer computes outputs y = (shift + scale * x) ^ power.
  optional float power = 1 [default = 1.0];
  optional float scale = 2 [default = 1.0];
  optional float shift = 3 [default = 0.0];
}

// Message that stores parameters used by WindowDataLayer
message WindowDataParameter {
  // Specify the data source.
  optional string source = 1;
  // For data pre-processing, we can do simple scaling and subtracting the
  // data mean, if provided. Note that the mean subtraction is always carried
  // out before scaling.
  optional float scale = 2 [default = 1];
  optional string mean_file = 3;
  // Specify the batch size.
  optional uint32 batch_size = 4;
  // Specify if we would like to randomly crop an image.
  optional uint32 crop_size = 5 [default = 0];
  // Specify if we want to randomly mirror data.
  optional bool mirror = 6 [default = false];
  // Foreground (object) overlap threshold
  optional float fg_threshold = 7 [default = 0.5];
  // Background (non-object) overlap threshold
  optional float bg_threshold = 8 [default = 0.5];
  // Fraction of batch that should be foreground objects
  optional float fg_fraction = 9 [default = 0.25];
  // Amount of contextual padding to add around a window
  // (used only by the window_data_layer)
  optional uint32 context_pad = 10 [default = 0];
  // Mode for cropping out a detection window
  // warp: cropped window is warped to a fixed size and aspect ratio
  // square: the tightest square around the window is cropped
  optional string crop_mode = 11 [default = "warp"];
}

// DEPRECATED: V0LayerParameter is the old way of specifying layer parameters
// in Caffe. We keep this message type around for legacy support.
message V0LayerParameter {
  optional string name = 1; // the layer name
  optional string type = 2; // the string to specify
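PowerLayer's forward pass is stated right in its comment: y = (shift + scale * x) ^ power. As a one-function sketch:

```python
def power_forward(x, power=1.0, scale=1.0, shift=0.0):
    """PowerLayer forward: y = (shift + scale * x) ** power, elementwise."""
    return [(shift + scale * v) ** power for v in x]

print(power_forward([1.0, 4.0], power=0.5))  # [1.0, 2.0] -- elementwise sqrt
```

With the defaults (power = 1, scale = 1, shift = 0) the layer is the identity, which is why those are the proto defaults.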