UFLDL Tutorial: Exercise: Vectorization
Loading and displaying the data
Deep Learning and Unsupervised Feature Learning Tutorial Solutions
Download the MNIST dataset and the functions for loading it; see the MNIST dataset introduction for details.
```matlab
% Change the filenames if you've saved the files under different names
% On some platforms, the files might be saved as
% train-images.idx3-ubyte / train-labels.idx1-ubyte
images = loadMNISTImages('train-images.idx3-ubyte');
labels = loadMNISTLabels('train-labels.idx1-ubyte');

% We are using display_network from the autoencoder code
display_network(images(:,1:100)); % Show the first 100 images
disp(labels(1:10));               % Show the first 10 labels
```

Modify the initial parameters in train.m
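The `loadMNISTImages`/`loadMNISTLabels` helpers ship with the tutorial's starter code. For reference, the IDX file format they parse is simple: a big-endian header (magic number 2051 for images, 2049 for labels, then the counts and dimensions as 32-bit integers) followed by raw uint8 data. A Python/NumPy sketch of the same parsing, run here on a tiny synthetic in-memory buffer rather than the real files (the function names are my own, not from the starter code):

```python
import struct
import numpy as np

def load_idx_images(buf):
    """Parse the IDX3 image format used by MNIST: big-endian header
    (magic 2051, count, rows, cols), then raw uint8 pixel data."""
    magic, n, rows, cols = struct.unpack(">IIII", buf[:16])
    assert magic == 2051, "not an idx3-ubyte image file"
    pixels = np.frombuffer(buf, dtype=np.uint8, offset=16)
    # Like loadMNISTImages: one image per column, scaled to [0, 1]
    return pixels.reshape(n, rows * cols).T.astype(np.float64) / 255.0

def load_idx_labels(buf):
    """Parse the IDX1 label format: magic 2049, count, then uint8 labels."""
    magic, n = struct.unpack(">II", buf[:8])
    assert magic == 2049, "not an idx1-ubyte label file"
    return np.frombuffer(buf, dtype=np.uint8, offset=8)

# Tiny synthetic "files": two 2x2 images and their labels
img_buf = struct.pack(">IIII", 2051, 2, 2, 2) + bytes([0, 255, 128, 64, 10, 20, 30, 40])
lbl_buf = struct.pack(">II", 2049, 2) + bytes([7, 3])
images = load_idx_images(img_buf)   # shape (4, 2): pixels x examples
labels = load_idx_labels(lbl_buf)
```

The one-image-per-column layout matches what `display_network` and the rest of the exercise code expect.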
```matlab
visibleSize = 28*28;   % number of input units
hiddenSize = 196;      % number of hidden units
sparsityParam = 0.1;   % desired average activation of the hidden units
                       % (denoted by the Greek letter rho, which looks like
                       % a lower-case "p", in the lecture notes)
lambda = 3e-3;         % weight decay parameter
beta = 3;              % weight of the sparsity penalty term
```
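With these settings, `initializeParameters` unrolls W1, W2, b1, b2 into a single vector `theta`, so its length should be 2 × hiddenSize × visibleSize + hiddenSize + visibleSize. A quick sanity check as a Python/NumPy sketch (the unrolling order mirrors the tutorial's convention; treat the layout as an assumption, not the starter code itself):

```python
import numpy as np

visible_size = 28 * 28   # input units
hidden_size = 196        # hidden units

def initialize_parameters(hidden_size, visible_size):
    # Random weights in [-r, r], as in the tutorial's initializeParameters.m
    r = np.sqrt(6.0 / (hidden_size + visible_size + 1))
    W1 = np.random.uniform(-r, r, (hidden_size, visible_size))
    W2 = np.random.uniform(-r, r, (visible_size, hidden_size))
    b1 = np.zeros(hidden_size)
    b2 = np.zeros(visible_size)
    # Unroll everything into one parameter vector, as the optimizer expects
    return np.concatenate([W1.ravel(), W2.ravel(), b1, b2])

theta = initialize_parameters(hidden_size, visible_size)
# 2 * 196 * 784 + 196 + 784 = 308308 parameters in total
```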
Modify the training set: in train.m, replace the patch generation in STEP 1 with the following.
```matlab
%% STEP 1: Implement sampleIMAGES
% After implementing sampleIMAGES, the display_network command should
% display a random sample of 200 patches from the dataset
% patches = sampleIMAGES;
% figure
% display_network(patches(:,randi(size(patches,2),200,1)),8);
% title('sampleIMAGES')
% randi(size(patches,2),200,1) produces a 200-element column vector of
% random indices in 1..10000, i.e. 200 patches are picked at random to display

images = loadMNISTImages('train-images.idx3-ubyte');
labels = loadMNISTLabels('train-labels.idx1-ubyte');

% We are using display_network from the autoencoder code
figure
display_network(images(:,1:100)); % Show the first 100 images
title('first100images')
disp(labels(1:10));               % Show the first 10 labels

patches = zeros(visibleSize, 10000);
for i = 1:10000
    patches(:,i) = reshape(images(:,i), visibleSize, 1);
end

% patches = normalizeData(patches);
% No normalization is needed for handwritten-digit recognition:
% the MNIST data are already normalized

% Obtain random parameters theta (initialize the parameter vector)
theta = initializeParameters(hiddenSize, visibleSize);
```
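The point of the exercise is to replace per-example loops in sparseAutoencoderCost with matrix operations over all 10000 patches at once. A Python/NumPy sketch of the vectorized forward pass, including the average hidden activation rho-hat needed by the sparsity penalty (shapes follow the tutorial's one-example-per-column convention; this is a stand-in for the MATLAB code, not the starter implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_all(W1, b1, W2, b2, X):
    """Vectorized forward pass: X is (visible, m), one example per column.
    No loop over the m examples -- broadcasting handles the bias terms."""
    Z2 = W1 @ X + b1[:, None]      # (hidden, m) pre-activations
    A2 = sigmoid(Z2)               # hidden activations for all examples
    Z3 = W2 @ A2 + b2[:, None]     # (visible, m) reconstructions
    A3 = sigmoid(Z3)
    rho_hat = A2.mean(axis=1)      # average activation of each hidden unit
    return A2, A3, rho_hat

rng = np.random.default_rng(0)
visible, hidden, m = 784, 196, 100   # small m just for the demo
X = rng.random((visible, m))
W1 = rng.normal(scale=0.01, size=(hidden, visible))
W2 = rng.normal(scale=0.01, size=(visible, hidden))
b1 = np.zeros(hidden)
b2 = np.zeros(visible)
A2, A3, rho_hat = forward_all(W1, b1, W2, b2, X)
```

The same idea applies to backpropagation: the per-example deltas become (hidden, m) and (visible, m) matrices, and the weight gradients are single matrix products.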
To save time, you can comment out STEP 3: Gradient Checking in train.m; the training set in this exercise is much larger, so gradient checking is slow.
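Gradient checking is slow because the central-difference approximation, (J(theta + eps·e_i) − J(theta − eps·e_i)) / (2·eps), needs two cost evaluations per parameter, and here theta has over 300000 components. A minimal Python/NumPy sketch of the check, demonstrated on a toy quadratic cost rather than the autoencoder cost:

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Central-difference approximation of grad J at theta.
    One pair of cost evaluations per parameter, hence O(n) cost calls."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
    return grad

# Toy check: J(theta) = theta' A theta has known gradient 2 A theta
A = np.array([[2.0, 1.0], [1.0, 3.0]])
J = lambda t: t @ A @ t
theta = np.array([1.0, -2.0])
num = numerical_gradient(J, theta)
exact = 2 * A @ theta   # analytic gradient of the symmetric quadratic
# Relative difference should be tiny if the analytic gradient is correct
diff = np.linalg.norm(num - exact) / np.linalg.norm(num + exact)
```

In the exercise, the same comparison is run between computeNumericalGradient and the analytic gradient from sparseAutoencoderCost, which is why it is worth doing once on a small instance and then disabling.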
References
https://github.com/jiandanjinxin/matlab_code-ufldl-exercise-/tree/master/vectorization_exercise
UFLDL deep learning tutorial lab notes 2: sparse autoencoding of the MNIST dataset with vectorization
Deep Learning 2_深度学习UFLDL教程: vectorized programming (Stanford deep learning tutorial)
Andrew Ng's open course