

UFLDL Tutorial: Exercise: Vectorization

Published: 2023/12/13

Load and Display the Data


Deep Learning and Unsupervised Feature Learning Tutorial Solutions

Download the MNIST dataset and the functions for loading it; see also the introduction to the MNIST dataset.

% Change the filenames if you've saved the files under different names
% On some platforms, the files might be saved as
% train-images.idx3-ubyte / train-labels.idx1-ubyte
images = loadMNISTImages('train-images.idx3-ubyte');
labels = loadMNISTLabels('train-labels.idx1-ubyte');

% We are using display_network from the autoencoder code
display_network(images(:,1:100)); % Show the first 100 images
disp(labels(1:10));
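As a quick check that the data loaded correctly, note that the tutorial's loadMNISTImages returns a 784-by-60000 double matrix with pixel values already scaled to [0,1] (this description of the loader's output is an assumption based on the tutorial code, not something stated in this article):

```matlab
% Sanity check after loading (assumes loadMNISTImages scales pixels to [0,1])
assert(size(images, 1) == 28*28);              % one column per 28x28 image
assert(all(images(:) >= 0 & images(:) <= 1));  % values already normalized
fprintf('Loaded %d images\n', size(images, 2));
```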

Modify the initial parameters in train.m


visibleSize = 28*28;   % number of input units
hiddenSize = 196;      % number of hidden units
sparsityParam = 0.1;   % desired average activation of the hidden units
                       % (denoted by the Greek letter rho, which looks like a
                       % lower-case "p", in the lecture notes)
lambda = 3e-3;         % weight decay parameter
beta = 3;              % weight of the sparsity penalty term

修改訓(xùn)練集,把step1里面的patches的產(chǎn)生改為


%% STEP 1: Implement sampleIMAGES
% After implementing sampleIMAGES, the display_network command should
% display a random sample of 200 patches from the dataset.
% patches = sampleIMAGES;
% figure
% display_network(patches(:,randi(size(patches,2),200,1)),8)
% title('sampleIMAGES')
% randi(size(patches,2),200,1) produces a 200-element column vector of
% random indices between 1 and 10000, i.e. 200 random patches are shown.

images = loadMNISTImages('train-images.idx3-ubyte');
labels = loadMNISTLabels('train-labels.idx1-ubyte');

% We are using display_network from the autoencoder code
figure
display_network(images(:,1:100)); % Show the first 100 images
title('first100images')
figure
disp(labels(1:10));
title('first100label')

% Use the first 10000 MNIST digits as training patches
patches = zeros(visibleSize, 10000);
for i = 1:10000
    patches(:,i) = reshape(images(:,i), visibleSize, 1);
end

% patches = normalizeData(patches);
% No normalization is needed for handwritten digit recognition here:
% the MNIST images are already scaled to [0,1].

% Obtain random parameter vector theta
theta = initializeParameters(hiddenSize, visibleSize);
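The point of this exercise is to vectorize sparseAutoencoderCost so that it processes all 10,000 examples in one pass instead of looping over them. The following is a minimal sketch of what the vectorized cost and gradients might look like. It assumes W1, W2, b1, b2 have been unpacked from theta as in the earlier sparse-autoencoder exercise, and that sigmoid is the helper function defined in the starter code; none of these lines appear in this article, so treat them as one possible implementation rather than the reference solution:

```matlab
% Vectorized forward pass over all m examples at once (sketch, not the
% official solution; W1, W2, b1, b2 unpacked from theta as in the earlier
% sparse-autoencoder exercise, sigmoid as defined in the starter code)
m  = size(patches, 2);
z2 = W1 * patches + repmat(b1, 1, m);
a2 = sigmoid(z2);                    % hidden activations for all examples
z3 = W2 * a2 + repmat(b2, 1, m);
a3 = sigmoid(z3);                    % reconstructions
rhoHat = mean(a2, 2);                % average activation of each hidden unit

% Cost: reconstruction error + weight decay + KL sparsity penalty
cost = (0.5/m) * sum(sum((a3 - patches).^2)) ...
     + (lambda/2) * (sum(W1(:).^2) + sum(W2(:).^2)) ...
     + beta * sum(sparsityParam * log(sparsityParam ./ rhoHat) ...
                  + (1 - sparsityParam) * log((1 - sparsityParam) ./ (1 - rhoHat)));

% Vectorized backpropagation
delta3 = -(patches - a3) .* a3 .* (1 - a3);
sparsityDelta = beta * (-(sparsityParam ./ rhoHat) ...
                        + (1 - sparsityParam) ./ (1 - rhoHat));
delta2 = (W2' * delta3 + repmat(sparsityDelta, 1, m)) .* a2 .* (1 - a2);

W1grad = delta2 * patches' / m + lambda * W1;
W2grad = delta3 * a2' / m + lambda * W2;
b1grad = mean(delta2, 2);
b2grad = mean(delta3, 2);
```

Because every step is a matrix operation over the full batch, this runs orders of magnitude faster in MATLAB than an equivalent per-example loop.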

To speed things up, comment out the "STEP 3: Gradient Checking" section in train.m: with the larger training set used in this exercise, gradient checking is very slow.
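For orientation, the gradient-checking section in the starter train.m looks roughly like the block below (the exact lines may differ between versions of the starter code, so treat this as an approximate sketch); commenting it all out skips the slow numerical check:

```matlab
%% STEP 3: Gradient Checking (comment this whole block out for full MNIST)
% checkNumericalGradient();
% numgrad = computeNumericalGradient(@(x) sparseAutoencoderCost(x, visibleSize, ...
%                                    hiddenSize, lambda, sparsityParam, ...
%                                    beta, patches), theta);
% diff = norm(numgrad - grad) / norm(numgrad + grad);
% disp(diff);  % should be very small when run on a tiny subset of patches
```

If you still want to verify your vectorized gradients, run the check once on a small subset (e.g. 10 patches) before training on the full set.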


References


https://github.com/jiandanjinxin/matlab_code-ufldl-exercise-/tree/master/vectorization_exercise

Introductory deep learning tutorial, UFLDL study notes II: sparse autoencoding of the MNIST dataset using vectorization

Deep Learning 2, UFLDL tutorial: vectorized programming (Stanford deep learning tutorial)

Andrew Ng's open course
