An HDF file consists of a file id, at least one data descriptor, and zero or more data elements.

The file id is a 32-bit value occupying the first 4 bytes of an HDF file. By reading this value, an application can determine whether the file is an HDF file.
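As a quick illustration, here is a minimal Python sketch that checks those first 4 bytes; the HDF4 magic number is the byte sequence 0x0E 0x03 0x13 0x01, and the file path is just a placeholder:

# Minimal sketch: check whether a file starts with the HDF4 magic number.
# 'example.hdf' is a placeholder path.
HDF4_MAGIC = b'\x0e\x03\x13\x01'

def is_hdf4(path):
    with open(path, 'rb') as f:
        return f.read(4) == HDF4_MAGIC

print(is_hdf4('example.hdf'))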

A data descriptor block contains a number of data descriptors. Each data descriptor is 12 bytes long and consists of four fields: a 16-bit tag, a 16-bit reference number, a 32-bit data offset, and a 32-bit data length.

The tag is the data descriptor field that identifies the type of data stored in the corresponding data element. For example, 306 is the identifier for a raster image object.

The reference number is a 16-bit unsigned integer. For every object in an HDF file, the HDF library assigns a reference number that is unique for the object's tag, and neither the tag nor the reference number may change during the lifetime of the data object they identify. Together, the tag/reference-number pair uniquely identifies the corresponding data object in the file.

Reference numbers are not necessarily assigned consecutively, so beyond distinguishing objects that share the same tag, no meaning should be read into their values. Application programmers sometimes find it convenient to encode extra information into reference numbers, but it must be stressed that the HDF library itself attaches no such meaning to them.

The data offset field is a 32-bit unsigned integer. It locates the data element within the file by storing the number of bytes from the beginning of the file to the beginning of the data element.

The length field is a 32-bit unsigned integer. It gives the size of the entire data element in bytes; as the data element grows, so does its length.

The data element is the raw-data portion of a data object; for a raster image, for example, it holds the value of each pixel.
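Putting the four descriptor fields together: each data descriptor is a fixed 12-byte record, so it can be decoded with a single unpack. A minimal sketch, assuming the 12 descriptor bytes have already been read from the file (HDF stores multi-byte values big-endian):

import struct

# One data descriptor: uint16 tag, uint16 reference number,
# uint32 offset, uint32 length -- 12 bytes total, big-endian.
def parse_dd(dd_bytes):
    tag, ref, offset, length = struct.unpack('>HHII', dd_bytes)
    return tag, ref, offset, length

# With offset and length in hand, the raw data element can then be read
# directly: f.seek(offset) followed by f.read(length).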

The simplest HDF5 read/write operations in MATLAB

testHDF5.m

%% WRITING TO HDF5
filename='trial.h5';
num_total_samples=10000;

% to simulate data being read from disk / generated etc.
data_disk=rand(5,5,1,num_total_samples);
label_disk=rand(10,num_total_samples);

chunksz=100;
created_flag=false;
totalct=0;

for batchno=1:num_total_samples/chunksz
    fprintf('batch no. %d\n', batchno);
    last_read=(batchno-1)*chunksz;

    % to simulate maximum data to be held in memory before dumping to hdf5 file
    batchdata=data_disk(:,:,1,last_read+1:last_read+chunksz);
    batchlabs=label_disk(:,last_read+1:last_read+chunksz);

    % store to hdf5
    startloc=struct('dat',[1,1,1,totalct+1], 'lab', [1,totalct+1]);
    curr_dat_sz=store2hdf5(filename, batchdata, batchlabs, ~created_flag, startloc, chunksz);
    created_flag=true;          % flag set so that file is created only once
    totalct=curr_dat_sz(end);   % updated dataset size (#samples)
end

% display structure of the stored HDF5 file
h5disp(filename);

%% READING FROM HDF5

% Read data and labels for samples #1000 to 1999
data_rd=h5read(filename, '/data', [1 1 1 1000], [5, 5, 1, 1000]);
label_rd=h5read(filename, '/label', [1 1000], [10, 1000]);

fprintf('Testing ...\n');
try
    assert(isequal(data_rd, single(data_disk(:,:,:,1000:1999))), 'Data do not match');
    assert(isequal(label_rd, single(label_disk(:,1000:1999))), 'Labels do not match');
    fprintf('Success!\n');
catch err
    fprintf('Test failed ...\n');
    getReport(err)
end

%delete(filename);

% CREATE list.txt containing filename, to be used as source for HDF5_DATA_LAYER
FILE=fopen('list.txt', 'w');
fprintf(FILE, '%s', filename);
fclose(FILE);
fprintf('HDF5 filename listed in %s \n', 'list.txt');

% NOTE: In net definition prototxt, use list.txt as input to HDF5_DATA as:
% layers {
%   name: "data"
%   type: HDF5_DATA
%   top: "data"
%   top: "labelvec"
%   hdf5_data_param {
%     source: "/path/to/list.txt"
%     batch_size: 64
%   }
% }

store2hdf5.m

function [curr_dat_sz, curr_lab_sz] = store2hdf5(filename, data, labels, create, startloc, chunksz)
  % *data* is W*H*C*N matrix of images; should be normalized (e.g. to lie between 0 and 1) beforehand
  % *label* is D*N matrix of labels (D labels per sample)
  % *create* [0/1] specifies whether to create file newly or to append to previously created file,
  %   useful to store information in batches when a dataset is too big to be held in memory (default: 1)
  % *startloc* (point at which to start writing data). By default,
  %   if create=1 (create mode), startloc.data=[1 1 1 1], and startloc.lab=[1 1];
  %   if create=0 (append mode), startloc.data=[1 1 1 K+1], and startloc.lab=[1 K+1];
  %   where K is the current number of samples stored in the HDF
  % *chunksz* (used only in create mode) specifies the number of samples to be stored per chunk
  %   (see HDF5 documentation on chunking) for creating HDF5 files with unbounded maximum size.
  %   TLDR; higher chunk sizes allow faster read-write operations

  % verify that format is right
  dat_dims=size(data);
  lab_dims=size(labels);
  num_samples=dat_dims(end);

  assert(lab_dims(end)==num_samples, 'Number of samples should be matched between data and labels');

  if ~exist('create','var')
    create=true;
  end

  if create
    %fprintf('Creating dataset with %d samples\n', num_samples);
    if ~exist('chunksz', 'var')
      chunksz=1000;
    end
    if exist(filename, 'file')
      fprintf('Warning: replacing existing file %s \n', filename);
      delete(filename);
    end
    h5create(filename, '/data', [dat_dims(1:end-1) Inf], 'Datatype', 'single', 'ChunkSize', [dat_dims(1:end-1) chunksz]); % width, height, channels, number
    h5create(filename, '/label', [lab_dims(1:end-1) Inf], 'Datatype', 'single', 'ChunkSize', [lab_dims(1:end-1) chunksz]); % width, height, channels, number
    if ~exist('startloc','var')
      startloc.dat=[ones(1,length(dat_dims)-1), 1];
      startloc.lab=[ones(1,length(lab_dims)-1), 1];
    end
  else  % append mode
    if ~exist('startloc','var')
      info=h5info(filename);
      prev_dat_sz=info.Datasets(1).Dataspace.Size;
      prev_lab_sz=info.Datasets(2).Dataspace.Size;
      assert(prev_dat_sz(1:end-1)==dat_dims(1:end-1), 'Data dimensions must match existing dimensions in dataset');
      assert(prev_lab_sz(1:end-1)==lab_dims(1:end-1), 'Label dimensions must match existing dimensions in dataset');
      startloc.dat=[ones(1,length(dat_dims)-1), prev_dat_sz(end)+1];
      startloc.lab=[ones(1,length(lab_dims)-1), prev_lab_sz(end)+1];
    end
  end

  if ~isempty(data)
    h5write(filename, '/data', single(data), startloc.dat, size(data));
    h5write(filename, '/label', single(labels), startloc.lab, size(labels));
  end

  if nargout
    info=h5info(filename);
    curr_dat_sz=info.Datasets(1).Dataspace.Size;
    curr_lab_sz=info.Datasets(2).Dataspace.Size;
  end
end
end
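For comparison, the create-once-then-append pattern that store2hdf5 implements maps directly onto h5py's chunked, resizable datasets. Here is a minimal Python sketch using the same shapes as the demo above (file name and batch count are illustrative):

import numpy as np
import h5py

chunksz = 100
with h5py.File('trial_py.h5', 'w') as f:
    # Unlimited sample axis via maxshape=None on that dimension
    dset = f.create_dataset('data', shape=(5, 5, 1, 0), maxshape=(5, 5, 1, None),
                            dtype='float32', chunks=(5, 5, 1, chunksz))
    lset = f.create_dataset('label', shape=(10, 0), maxshape=(10, None),
                            dtype='float32', chunks=(10, chunksz))
    for batchno in range(3):  # a few demo batches
        batchdata = np.random.rand(5, 5, 1, chunksz).astype('float32')
        batchlabs = np.random.rand(10, chunksz).astype('float32')
        n = dset.shape[-1]
        dset.resize(n + chunksz, axis=3)  # grow along the sample axis
        lset.resize(n + chunksz, axis=1)
        dset[..., n:n + chunksz] = batchdata
        lset[:, n:n + chunksz] = batchlabs

The axis ordering here simply mirrors the MATLAB shapes for readability; when writing files for Caffe directly from Python you would normally lay the data out as N×C×H×W.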

Parsing HDF files with Python

# xlwt is for writing Excel files (not HDF); pandas handles the HDF store
import math
import pandas as pd
import xlwt
from contextlib import closing

# HDF_FILR_URL and date are placeholders defined elsewhere in the program
with closing(pd.HDFStore(HDF_FILR_URL)) as store:
    df = store[date]

# index should be end -> region -> group
df.reset_index(inplace=True)
df.set_index(["end", "region", "group"], inplace=True)
df.sort_index(inplace=True)
# once the data is loaded, use the functions pandas provides to pull out what you need
slice_df = df.loc[dt]
rtt = slice_df.rtt.unstack(level=0) / 1000
cwnd = slice_df.cwnd.unstack(level=0)
total = slice_df.total.unstack(level=0)
rows = rtt.index.tolist()
columns = rtt.columns.tolist()
# finally, write the results to Excel
def writexcel(listname, name, time):
    # write the data into an Excel file; EXCEL_FILR_URL and AVG_RTT are placeholders defined elsewhere
    saveurl = EXCEL_FILR_URL + '%s_%s_%s.xls' % (AVG_RTT, time, name)
    excel_file = xlwt.Workbook()
    table = excel_file.add_sheet('tcpinfo')
    index_row = 0
    for item in listname:
        for item_key, item_value in item.items():
            table.write(index_row, 0, str(item_key))
            table.write(index_row, 1, str(item_value[1][0]))
            table.write(index_row, 2, str(item_value[1][1]))
            table.write(index_row, 3, str(item_value[0]).decode('utf-8'))
            index_row += 1
    excel_file.save(saveurl)
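The snippet above assumes the store and its MultiIndex already exist; HDF_FILR_URL, date, dt, EXCEL_FILR_URL and AVG_RTT all come from elsewhere in the original program. As a self-contained setup so the reading code has something to operate on (index values and column data are made up):

import numpy as np
import pandas as pd

# Build a small frame with the end -> region -> group MultiIndex used above
idx = pd.MultiIndex.from_product(
    [[1, 2], ['cn', 'us'], ['g1', 'g2']], names=['end', 'region', 'group'])
df = pd.DataFrame({'rtt': np.random.rand(8) * 1000,
                   'cwnd': np.random.randint(1, 100, 8),
                   'total': np.random.randint(1, 1000, 8)}, index=idx)

with pd.HDFStore('tcpinfo_demo.h5') as store:  # requires the PyTables package
    store['d20180101'] = df  # plays the role of store[date] above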

First, let's look at an MNIST HDF5 input example:

loadMNISTImages.m

function images = loadMNISTImages(filename)
%loadMNISTImages returns a 28x28x[number of MNIST images] matrix containing
%the raw MNIST images

fp = fopen(filename, 'rb');
assert(fp ~= -1, ['Could not open ', filename, '']);

magic = fread(fp, 1, 'int32', 0, 'ieee-be');
assert(magic == 2051, ['Bad magic number in ', filename, '']);

numImages = fread(fp, 1, 'int32', 0, 'ieee-be');
numRows = fread(fp, 1, 'int32', 0, 'ieee-be');
numCols = fread(fp, 1, 'int32', 0, 'ieee-be');

images = fread(fp, inf, 'unsigned char');
images = reshape(images, numCols, numRows, numImages);
images = permute(images,[2 1 3]);

fclose(fp);

% Reshape to #pixels x #examples
images = reshape(images, size(images, 1) * size(images, 2), size(images, 3));
% Convert to double and rescale to [0,1]
images = double(images) / 255;

end

loadMNISTLabels.m

function labels = loadMNISTLabels(filename)
%loadMNISTLabels returns a [number of MNIST images]x1 matrix containing
%the labels for the MNIST images

fp = fopen(filename, 'rb');
assert(fp ~= -1, ['Could not open ', filename, '']);

magic = fread(fp, 1, 'int32', 0, 'ieee-be');
assert(magic == 2049, ['Bad magic number in ', filename, '']);

numLabels = fread(fp, 1, 'int32', 0, 'ieee-be');

labels = fread(fp, inf, 'unsigned char');
assert(size(labels,1) == numLabels, 'Mismatch in label count');

fclose(fp);

end

mnist2hdf5.m

clc
close all
clear all

%% addpath mnistHelper;
addpath datasets;

% train-images.idx3-ubyte / train-labels.idx1-ubyte
images = loadMNISTImages('train-images-idx3-ubyte');
labels = loadMNISTLabels('train-labels-idx1-ubyte');

% reshape images to 4-D: [rows,col,channel,numbers]
trainData=reshape(images,[28 28 1 size(images,2)]);
% permute to [cols,rows,channel,numbers]
trainData=permute(trainData,[2 1 3 4]);
% permute labels to [labels, number of labels]
trainLabels=permute(labels,[2,1]);

h5create('train.hdf5','/data',size(trainData),'Datatype','double');
h5create('train.hdf5','/label',size(trainLabels),'Datatype','double');
h5write('train.hdf5','/data',trainData);
h5write('train.hdf5','/label',trainLabels);

%% test images
images = loadMNISTImages('t10k-images-idx3-ubyte');
labels = loadMNISTLabels('t10k-labels-idx1-ubyte');

% reshape images to 4-D: [rows,col,channel,numbers]
testData=reshape(images,[28 28 1 size(images,2)]);
% permute to [cols,rows,channel,numbers]
testData=permute(testData,[2 1 3 4]);
% permute labels to [labels, number of labels]
testLabels=permute(labels,[2,1]);

h5create('test.hdf5','/data',size(testData),'Datatype','double');
h5create('test.hdf5','/label',size(testLabels),'Datatype','double');
h5write('test.hdf5','/data',testData);
h5write('test.hdf5','/label',testLabels);
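A quick way to sanity-check the generated file is to read it back from Python. Note that tools using C (row-major) order, such as h5py, report the dimensions reversed relative to MATLAB, which here is exactly the N×C×H×W layout Caffe expects:

import h5py

with h5py.File('train.hdf5', 'r') as f:
    print(list(f.keys()))    # ['data', 'label']
    print(f['data'].shape)   # (60000, 1, 28, 28) as seen from Python
    print(f['label'].shape)  # (60000, 1)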

lenet_train_test.prototxt

name: "LeNet"
layer {name: "mnist"type: "HDF5Data"top: "data"top: "label"include {phase: TRAIN}hdf5_data_param {source: "examples/mnist/mnist_train_hdf/train_loc.txt"batch_size: 64}
}
layer {name: "mnist"type: "HDF5Data"top: "data"top: "label"include {phase: TEST}hdf5_data_param {source: "examples/mnist/mnist_test_hdf/test_loc.txt"batch_size: 100}
}
layer {name: "conv1"type: "Convolution"bottom: "data"top: "conv1"param {lr_mult: 1}param {lr_mult: 2}convolution_param {num_output: 20kernel_size: 5stride: 1weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layer {name: "pool1"type: "Pooling"bottom: "conv1"top: "pool1"pooling_param {pool: MAXkernel_size: 2stride: 2}
}
layer {name: "conv2"type: "Convolution"bottom: "pool1"top: "conv2"param {lr_mult: 1}param {lr_mult: 2}convolution_param {num_output: 50kernel_size: 5stride: 1weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layer {name: "pool2"type: "Pooling"bottom: "conv2"top: "pool2"pooling_param {pool: MAXkernel_size: 2stride: 2}
}
layer {name: "ip1"type: "InnerProduct"bottom: "pool2"top: "ip1"param {lr_mult: 1}param {lr_mult: 2}inner_product_param {num_output: 500weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layer {name: "relu1"type: "ReLU"bottom: "ip1"top: "ip1"
}
layer {name: "ip2"type: "InnerProduct"bottom: "ip1"top: "ip2"param {lr_mult: 1}param {lr_mult: 2}inner_product_param {num_output: 10weight_filler {type: "xavier"}bias_filler {type: "constant"}}
}
layer {name: "accuracy"type: "Accuracy"bottom: "ip2"bottom: "label"top: "accuracy"include {phase: TEST}
}
layer {name: "loss"type: "SoftmaxWithLoss"bottom: "ip2"bottom: "label"top: "loss"
}

Since there may be multiple H5 files, the HDF5Data layer reads its input as a list of file paths from a TXT file. Example contents of train_loc.txt:

train1.h5
train2.h5

Generating HDF5 data for Caffe with Python

'''
Requirements:
sudo pip install pydot
sudo apt-get install -y graphviz

Interesting resources on Caffe:
- https://github.com/BVLC/caffe/tree/master/examples
- http://nbviewer.ipython.org/github/joyofdata/joyofdata-articles/blob/master/deeplearning-with-caffe/Neural-Networks-with-Caffe-on-the-GPU.ipynb

Interesting resources on Iris with ANNs:
- iris data set test bed: http://deeplearning4j.org/iris-flower-dataset-tutorial.html
- http://se.mathworks.com/help/nnet/examples/iris-clustering.html
- http://lab.fs.uni-lj.si/lasin/wp/IMIT_files/neural/doc/seminar8.pdf

Synonyms:
- output = label = target
- input = feature
'''

import subprocess
import platform
import copy

from sklearn.datasets import load_iris
import sklearn.metrics
import numpy as np
from sklearn.cross_validation import StratifiedShuffleSplit
import matplotlib.pyplot as plt
import h5py
import caffe
import caffe.draw


def load_data():
    '''Load Iris Data set'''
    data = load_iris()
    print(data.data)
    print(data.target)
    targets = np.zeros((len(data.target), 3))
    for count, target in enumerate(data.target):
        targets[count][target] = 1
    print(targets)

    new_data = {}
    #new_data['input'] = data.data
    new_data['input'] = np.reshape(data.data, (150, 1, 1, 4))
    new_data['output'] = targets
    #print(new_data['input'].shape)
    #new_data['input'] = np.random.random((150, 1, 1, 4))
    #new_data['output'] = np.random.random_integers(0, 1, size=(150, 3))
    return new_data


def save_data_as_hdf5(hdf5_data_filename, data):
    '''HDF5 is one of the data formats Caffe accepts'''
    with h5py.File(hdf5_data_filename, 'w') as f:
        f['data'] = data['input'].astype(np.float32)
        f['label'] = data['output'].astype(np.float32)


def main():
    '''This is the main function'''

    # Set parameters
    solver_prototxt_filename = 'iris_solver.prototxt'
    train_test_prototxt_filename = 'iris_train_test.prototxt'
    deploy_prototxt_filename = 'iris_deploy.prototxt'
    deploy_prototxt_batch2_filename = 'iris_deploy_batchsize2.prototxt'
    hdf5_train_data_filename = 'iris_train_data.hdf5'
    hdf5_test_data_filename = 'iris_test_data.hdf5'
    caffemodel_filename = 'iris__iter_1000000.caffemodel'  # generated by train()

    # Prepare data
    data = load_data()
    print(data)
    train_data = data
    test_data = data
    save_data_as_hdf5(hdf5_train_data_filename, data)
    save_data_as_hdf5(hdf5_test_data_filename, data)

    # Train network; train() and the helpers below are defined in the full
    # script (see the caffe_demos repository linked further down)
    train(solver_prototxt_filename)

    # Get predicted outputs
    input = np.array([[5.1, 3.5, 1.4, 0.2]])
    print(get_predicted_output(deploy_prototxt_filename, caffemodel_filename, input))
    input = np.array([[[[5.1, 3.5, 1.4, 0.2]]], [[[5.9, 3., 5.1, 1.8]]]])
    #print(get_predicted_output(deploy_prototxt_batch2_filename, caffemodel_filename, input))

    # Print network
    print_network(deploy_prototxt_filename, caffemodel_filename)
    print_network(train_test_prototxt_filename, caffemodel_filename)
    print_network_weights(train_test_prototxt_filename, caffemodel_filename)

    # Compute performance metrics
    inputs = data['input']
    outputs = get_predicted_outputs(deploy_prototxt_filename, caffemodel_filename, inputs)
    get_accuracy(data['output'], outputs)


if __name__ == "__main__":
    main()
    #cProfile.run('main()')  # if you want to do some profiling

This script was written by an MIT student to train on the Iris flower dataset. I'll flesh out this example later; consider it a placeholder for now.

For reference:

http://blog.csdn.net/shadow_guo/article/details/50382446

https://github.com/Franck-Dernoncourt/caffe_demos

http://fuel.readthedocs.io/en/latest/new_dataset.html

Next, let's look at HDF5 input for multi-label regression:

Python implementation

import os
import random
from PIL import Image
import numpy as np
import h5py

IMAGE_DIR = ['image_train', 'image_test']
HDF5_FILE = ['hdf5_train.h5', 'hdf5_test.h5']
LIST_FILE = ['list_train.txt', 'list_test.txt']

LABELS = dict(
    # (kind_1, kind_2)
    A_0=(0, 0),
    B_0=(1, 0),
    A_1=(0, 1),
    B_1=(1, 1),
    A_2=(0, 2),
    B_2=(1, 2),
)

print '\nplease wait...'

for kk, image_dir in enumerate(IMAGE_DIR):
    # read the file list into file_list
    file_list = ...
    # shuffle the file list
    random.shuffle(file_list)
    # label kind
    kind_index = ...

    # images are 96*32, single channel
    datas = np.zeros((len(file_list), 1, 32, 96))
    # each label is 1*2
    labels = np.zeros((len(file_list), 2))

    for ii, _file in enumerate(file_list):
        # hdf5 requires the data to be float or double;
        # also, Caffe's Hdf5DataLayer does not allow transform_param,
        # so divide by 256 manually
        datas[ii, :, :, :] = \
            np.array(Image.open(_file)).astype(np.float32) / 256
        labels[ii, :] = np.array(LABELS[kind_index]).astype(np.int)

    # write the hdf5 file
    with h5py.File(HDF5_FILE[kk], 'w') as f:
        f['data'] = datas
        f['labels'] = labels

    # write the list file; it may list multiple hdf5 files
    with open(LIST_FILE[kk], 'w') as f:
        f.write(os.path.abspath(HDF5_FILE[kk]) + '\n')

print '\ndone...'
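To confirm the layout Caffe will see, here is a short read-back sketch. The dataset names match what the script above wrote; since the HDF5Data layer looks datasets up by the names of its top blobs, the prototxt must use top: "data" and top: "labels" in this case:

import h5py

with h5py.File('hdf5_train.h5', 'r') as f:
    print(f['data'].shape)    # (N, 1, 32, 96): single-channel 96*32 images
    print(f['labels'].shape)  # (N, 2): two regression targets per sample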

A Python script that generates HDF5 data and converts it to LMDB:

import os
import lmdb
import random
import numpy as np
import caffe
from sklearn.utils import shuffle
from matplotlib import pyplot


def convert_data_lmdb(train_data, train_label, output_data_lmdb, output_labels_lmdb):
    """Used for saving data and multi-labels to lmdbs.
    call: convert_data_lmdb(train_X, train_y, 'train_data_lmdb', 'train_labels_lmdb')
    """
    X = train_data.astype(np.float)
    y = train_label.astype(np.float)
    X, y = shuffle(X, y, random_state=42)  # shuffle train data

    # creating images lmdb
    in_db = lmdb.open(output_data_lmdb, map_size=X.nbytes*10)
    with in_db.begin(write=True) as in_txn:
        for in_idx, in_ in enumerate(X):
            im = in_
            im = im[:, :, ::-1]
            im = im.transpose((2, 0, 1))
            im_dat = caffe.io.array_to_datum(im)
            #in_txn.put(in_idx.encode('ascii'), im_dat.SerializeToString())
            in_txn.put('{:0>10d}'.format(in_idx), im_dat.SerializeToString())
    in_db.close()

    # creating labels lmdb
    in_label = lmdb.open(output_labels_lmdb, map_size=y.nbytes*10)
    counter_label = 0
    with in_label.begin(write=True) as in_txn:
        for idx in range(y.shape[0]):
            datum = caffe.io.array_to_datum(y[np.newaxis, np.newaxis, idx])
            in_txn.put("{:0>10d}".format(counter_label), datum.SerializeToString())
            counter_label += 1
    in_label.close()


def write_hdf5(filename):
    import h5py
    IMAGE_SIZE = (96, 96)
    LABEL_SIZE = 30  # Multi-labels
    MEAN_VALUE = 128
    #filename = sys.argv[1]
    setname, ext = filename.split('.')
    with open(filename, 'r') as f:
        lines = f.readlines()
    np.random.shuffle(lines)
    sample_size = len(lines)
    imgs = np.zeros((sample_size, 1,) + IMAGE_SIZE, dtype=np.float32)
    scores = np.zeros((sample_size, LABEL_SIZE), dtype=np.float32)
    h5_filename = '{}.h5'.format(setname)
    with h5py.File(h5_filename, 'w') as h:
        for i, line in enumerate(lines):
            image_name, score = line[:-1].split()
            img = pyplot.imread(image_name)[:, :, 0].astype(np.float32)
            img = img.reshape((1,) + img.shape)
            #img -= MEAN_VALUE
            imgs[i] = img
            scores[i, 1] = float(score)
            if (i+1) % 1000 == 0:
                print('processed {} images!'.format(i+1))
        h.create_dataset('data', data=imgs)
        h.create_dataset('label', data=scores)
    with open('{}_h5.txt'.format(setname), 'w') as f:
        f.write(h5_filename)


# NOTE: this second definition shadows the write_hdf5 above; keep whichever variant you need
def write_hdf5(data, labels, output_filename):
    """Save image data and its label(s) to an hdf5 file.
    output_file.h5 contains data and label;
    data.shape is (n,c,h,w), label.shape is (n,labels)
    """
    import h5py
    X = data.astype(np.float32)
    y = labels.astype(np.float32)
    X, y = shuffle(X, y, random_state=42)  # shuffle train data
    IMAGE_SIZE = (96, 96)
    LABEL_SIZE = 30  # Multi-labels
    MEAN_VALUE = 128
    #filename = sys.argv[1]
    setname, ext = output_filename.split('.')
    sample_size = X.shape[0]
    imgs = np.zeros((sample_size, 1,) + IMAGE_SIZE, dtype=np.float32)
    scores = np.zeros((sample_size, LABEL_SIZE), dtype=np.float32)
    h5_filename = '{}.h5'.format(setname)
    with h5py.File(h5_filename, 'w') as h:
        i = 0
        for in_, label in zip(X, y):
            im = in_
            im = im[:, :, ::-1]
            im = im.transpose((2, 0, 1))
            imgs[i] = im
            scores[i] = label
            i = i + 1
            print('processed {} images!'.format(i))
        h.create_dataset('data', data=imgs)
        h.create_dataset('label', data=scores)
    with open('{}_h5.txt'.format(setname), 'w') as f:
        f.write(h5_filename)
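A hedged usage sketch with random stand-in data; the shapes are chosen to satisfy both functions (H×W×C images with a single channel, 30 labels per sample), and in practice X and y would come from your real dataset:

import numpy as np

train_X = np.random.rand(100, 96, 96, 1)  # N x H x W x C, single channel
train_y = np.random.rand(100, 30)         # N x 30 multi-label targets

convert_data_lmdb(train_X, train_y, 'train_data_lmdb', 'train_labels_lmdb')
write_hdf5(train_X, train_y, 'train_multilabel.h5')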

Space is limited, so the multi-label example is also left as a placeholder for now.
=============================================================================

References:

http://blog.csdn.net/u011762313/article/details/48830561

http://blog.csdn.net/eagelangel/article/details/51811519

http://blog.csdn.net/kuaitoukid/article/details/43448517

https://github.com/mravendi/caffe-mnist-hdf5/tree/master/MATLAB_HDF5_Converter

http://www.hust.site/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/2016-04-26/225.html

http://blog.csdn.net/shuzfan?viewmode=contents
