Caffe: Visualizing the Patterns Learned by Each Layer
Filter visualization
http://www.cnblogs.com/dupuleng/articles/4244877.html
This section follows http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/filter_visualization.ipynb and shows how to display each layer's parameters and outputs. This part matters, because in deep learning what we ultimately care about is what the network has actually learned.
1. Import the required modules and set the plotting parameters
import numpy as np
import matplotlib.pyplot as plt

# Make sure that caffe is on the python path:
caffe_root = '../'  # this file is expected to be in {caffe_root}/examples; an absolute path is recommended
import sys
sys.path.insert(0, caffe_root + 'python')

import caffe

plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

2. Fetch the pretrained model and set up the classifier
Download the pretrained model with the following command:
./scripts/download_model_binary.py models/bvlc_reference_caffenet

caffe.set_phase_test()
caffe.set_mode_cpu()
net = caffe.Classifier(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',
                       caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')
# input preprocessing: 'data' is the name of the input blob == net.inputs[0]
net.set_mean('data', np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy'))  # ImageNet mean
net.set_raw_scale('data', 255)           # pixel values are in the range [0, 255]
net.set_channel_swap('data', (2, 1, 0))  # the model was trained on BGR images, so convert test images from RGB to BGR

3. Run a prediction
scores = net.predict([caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')])

4. The features (blobs) at each layer and their shapes
[(k, v.data.shape) for k, v in net.blobs.items()]

[('data', (10, 3, 227, 227)),
 ('conv1', (10, 96, 55, 55)),
 ('pool1', (10, 96, 27, 27)),
 ('norm1', (10, 96, 27, 27)),
 ('conv2', (10, 256, 27, 27)),
 ('pool2', (10, 256, 13, 13)),
 ('norm2', (10, 256, 13, 13)),
 ('conv3', (10, 384, 13, 13)),
 ('conv4', (10, 384, 13, 13)),
 ('conv5', (10, 256, 13, 13)),
 ('pool5', (10, 256, 6, 6)),
 ('fc6', (10, 4096, 1, 1)),
 ('fc7', (10, 4096, 1, 1)),
 ('fc8', (10, 1000, 1, 1)),
 ('prob', (10, 1000, 1, 1))]

Take ('data', (10, 3, 227, 227)) as an example: 'data' is the layer name, 10 is the batch size, 3 is the number of feature maps (channels), and 227 x 227 is the size of each feature map.
5. The parameters at each layer and their shapes
[(k, v[0].data.shape) for k, v in net.params.items()]

[('conv1', (96, 3, 11, 11)),
 ('conv2', (256, 48, 5, 5)),
 ('conv3', (384, 256, 3, 3)),
 ('conv4', (384, 192, 3, 3)),
 ('conv5', (256, 192, 3, 3)),
 ('fc6', (1, 1, 4096, 9216)),
 ('fc7', (1, 1, 4096, 4096)),
 ('fc8', (1, 1, 1000, 4096))]

Take ('conv1', (96, 3, 11, 11)) as an example: 'conv1' is the layer name, 96 is the number of filters, and (3, 11, 11) is each filter's size, where 3 is the number of feature maps in the previous layer. The layer before conv1 is the RGB input with three channels, hence the 3. For ('conv2', (256, 48, 5, 5)), however, the previous layer is ('norm1', (10, 96, 27, 27)) with 96 feature maps, yet each filter has only 48 = 96/2 input channels. The reason is that conv2 is defined with group: 2 (a legacy of AlexNet's two-GPU training): the 96 input maps are split into two groups of 48, and each filter convolves only the maps in its own group.
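A minimal sanity check of this arithmetic, assuming the net from step 2 is loaded; the group value of 2 is taken from the conv2 definition in the CaffeNet prototxt:

n_input_maps = net.blobs['norm1'].data.shape[1]         # 96 feature maps coming out of norm1
group = 2                                               # group: 2 in the conv2 layer definition
channels_per_filter = net.params['conv2'][0].data.shape[1]
print channels_per_filter == n_input_maps / group       # prints True: 48 input channels per filter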
6. A helper function for plotting feature maps
def vis_square(data, padsize=1, padval=0):
    # normalize the data to [0, 1] for display
    data -= data.min()
    data /= data.max()

    # force the number of filters to be square
    n = int(np.ceil(np.sqrt(data.shape[0])))
    padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize)) + ((0, 0),) * (data.ndim - 3)
    data = np.pad(data, padding, mode='constant', constant_values=(padval, padval))

    # tile the filters into an image
    data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
    data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])

    plt.figure()  # open a new figure
    plt.imshow(data)

7. Display the input image
plt.imshow(net.deprocess('data', net.blobs['data'].data[4]))  # undo the preprocessing; index 4 is one of the 10 oversampled input crops
8. The "conv1" weights
filters = net.params['conv1'][0].data
vis_square(filters.transpose(0, 2, 3, 1))  # move the channel axis last so each filter displays as a color image

The filters appear in color because each one has three channels, i.e. size (3, 11, 11), and there are 96 in total. You can see that each filter has learned a pronounced edge or color pattern.
9. The "conv1" output
feat = net.blobs['conv1'].data[4, :36]
vis_square(feat, padval=1)

"conv1" produces 96 feature maps; only the first 36 are shown here, but you can of course display all of them.
10. The "conv2" weights. "conv2" contains 256 filters of size 5 x 5 x 48; only a subset is shown here.
In the code below, 48**2 means 48 x 48: the first 48 filters, each with 48 input channels, are flattened into 48 x 48 = 2304 single-channel 5 x 5 tiles. To understand what the second layer has really learned, you also have to take the first layer's weights into account, since the layers form a cascade; some researchers have already worked on exactly this.
filters = net.params['conv2'][0].data
vis_square(filters[:48].reshape(48**2, 5, 5))
11. The "conv2" output (feature maps)
feat = net.blobs['conv2'].data[4, :36]
vis_square(feat, padval=1)
12. The "conv3" feature maps
feat = net.blobs['conv3'].data[4]
vis_square(feat, padval=0.5)
13. The "conv4" feature maps
feat = net.blobs['conv4'].data[4]
vis_square(feat, padval=0.5)
In the same way you can inspect the feature maps of any layer you like, as the sketch below shows.
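For convenience, here is a small hypothetical wrapper around vis_square from step 6; show_feature_maps is not part of the reference notebook, just an illustration of the recurring pattern:

def show_feature_maps(net, layer, n=36, padval=0.5):
    # hypothetical helper: show the first n feature maps of the named layer for input crop 4
    feat = net.blobs[layer].data[4, :n]
    vis_square(feat, padval=padval)

show_feature_maps(net, 'norm2')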
14. Next, look at the effect of the pooling layer
Below are the outputs of "conv5" and "pool5" respectively. You can see that after the pooling layer each feature map becomes more discriminative, which is exactly what a classification model wants.
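The corresponding calls follow the same pattern as the earlier layers; this sketch mirrors the reference notebook, assuming the net and vis_square defined above:

feat = net.blobs['conv5'].data[4, :36]   # "conv5" output, first 36 maps
vis_square(feat, padval=0.5)

feat = net.blobs['pool5'].data[4, :36]   # "pool5" output after max pooling
vis_square(feat, padval=1)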
15. "fc6" and "fc7" are two fully connected layers, each with an output of size 4096 x 1. The "fc6" activations are fairly uniformly distributed and not very discriminative, whereas after "fc7" the outputs become noticeably more separable.
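A sketch of how these activations can be plotted, modeled on the reference notebook: the first subplot traces all 4096 values, the second shows a histogram of the positive ones. Run it again with 'fc7' to compare the two layers.

feat = net.blobs['fc6'].data[4]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)                                # all 4096 activations
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)   # histogram of the positive activations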
16. The "prob" layer is the prediction layer: it gives the probability that the sample belongs to each class. The ImageNet database has 1000 classes, so this layer's output is 1000 x 1.
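To see the resulting distribution, you can simply plot the flattened probability vector, as the notebook does:

feat = net.blobs['prob'].data[4]
plt.plot(feat.flat)   # one probability per ImageNet class; the spikes mark the top predictions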
17. Print the top-5 predicted classes
# load labels
imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'
try:
    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
except:
    # fetch the auxiliary ImageNet data (including the label file) if it is missing
    !../data/ilsvrc12/get_ilsvrc_aux.sh
    labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')

# sort top k predictions from softmax output
top_k = net.blobs['prob'].data[4].flatten().argsort()[-1:-6:-1]
print labels[top_k]

['n02123045 tabby, tabby cat' 'n02123159 tiger cat'
 'n02124075 Egyptian cat' 'n02119022 red fox, Vulpes vulpes'
 'n02127052 lynx, catamount']