

【DeepLearning】Exercise:Learning color features with Sparse Autoencoders

Published 2023/12/1


習(xí)題鏈接:Exercise:Learning color features with Sparse Autoencoders


sparseAutoencoderLinearCost.m

function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                            lambda, sparsityParam, beta, data)

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
% -------------------- YOUR CODE HERE --------------------

% W1 is a hiddenSize * visibleSize matrix
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
% W2 is a visibleSize * hiddenSize matrix
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
% b1 is a hiddenSize * 1 vector
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
% b2 is a visibleSize * 1 vector
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

numCases = size(data, 2);

% forward propagation
z2 = W1 * data + repmat(b1, 1, numCases);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, 1, numCases);
a3 = z3;    % linear decoder: no sigmoid on the output layer

% reconstruction error
sqrerror = (data - a3) .* (data - a3);
error = sum(sum(sqrerror)) / (2 * numCases);
% weight decay
wtdecay = (sum(sum(W1 .* W1)) + sum(sum(W2 .* W2))) / 2;
% sparsity penalty: KL divergence between sparsityParam and the mean activation rho
rho = sum(a2, 2) ./ numCases;
divergence = sparsityParam .* log(sparsityParam ./ rho) + ...
             (1 - sparsityParam) .* log((1 - sparsityParam) ./ (1 - rho));
sparsity = sum(divergence);

cost = error + lambda * wtdecay + beta * sparsity;

% backpropagation
% delta3 is a visibleSize * numCases matrix; because the output layer is
% linear, its derivative is 1 and no sigmoiddiff factor appears here
delta3 = -(data - a3);
% delta2 is a hiddenSize * numCases matrix
sparsityterm = beta * (-sparsityParam ./ rho + (1-sparsityParam) ./ (1-rho));
delta2 = (W2' * delta3 + repmat(sparsityterm, 1, numCases)) .* sigmoiddiff(z2);

W1grad = delta2 * data' ./ numCases + lambda * W1;
b1grad = sum(delta2, 2) ./ numCases;
W2grad = delta3 * a2' ./ numCases + lambda * W2;
b2grad = sum(delta3, 2) ./ numCases;

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc).  Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

function sigmdiff = sigmoiddiff(x)
    sigmdiff = sigmoid(x) .* (1 - sigmoid(x));
end
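As a cross-check on the backpropagation above, here is a minimal NumPy sketch (not from the original post) of the same cost and gradient computation. The function name `sparse_ae_linear_cost` and the toy dimensions are illustrative assumptions; the parameter layout and formulas mirror the MATLAB code, and a central-difference gradient check can be used to verify it.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparse_ae_linear_cost(theta, visible, hidden, lam, rho_target, beta, data):
    """Cost and gradient of a sparse autoencoder with a linear decoder."""
    # Unpack parameters from the flat vector, mirroring the MATLAB layout.
    W1 = theta[:hidden*visible].reshape(hidden, visible)
    W2 = theta[hidden*visible:2*hidden*visible].reshape(visible, hidden)
    b1 = theta[2*hidden*visible:2*hidden*visible+hidden]
    b2 = theta[2*hidden*visible+hidden:]
    m = data.shape[1]

    # Forward pass: sigmoid hidden layer, *linear* output layer.
    z2 = W1 @ data + b1[:, None]
    a2 = sigmoid(z2)
    a3 = W2 @ a2 + b2[:, None]          # no sigmoid here

    rho = a2.mean(axis=1)               # mean hidden activation
    kl = np.sum(rho_target*np.log(rho_target/rho)
                + (1-rho_target)*np.log((1-rho_target)/(1-rho)))
    cost = (np.sum((data - a3)**2)/(2*m)
            + lam/2*(np.sum(W1**2) + np.sum(W2**2)) + beta*kl)

    # Backward pass: delta3 carries no sigmoid' factor (linear decoder).
    d3 = -(data - a3)
    sparse_term = beta*(-rho_target/rho + (1-rho_target)/(1-rho))
    d2 = (W2.T @ d3 + sparse_term[:, None]) * a2*(1-a2)
    grad = np.concatenate([(d2 @ data.T/m + lam*W1).ravel(),
                           (d3 @ a2.T/m + lam*W2).ravel(),
                           d2.mean(axis=1), d3.mean(axis=1)])
    return cost, grad
```

Running the analytic gradient against a numerical gradient on a small random problem is the same check computeNumericalGradient.m performs in the exercise.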


If your learned features come out looking like this, the likely cause is writing a3 = sigmoid(z3) instead of a3 = z3.
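Why that bug matters: the color patches in this exercise are ZCA-whitened, so the targets are roughly zero-mean and take values outside [0, 1], which a sigmoid output can never reach. A small illustrative sketch (the target values below are made up for the example):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Whitened pixel values can be negative or exceed 1.
targets = np.array([-1.2, -0.3, 0.4, 1.7])   # hypothetical whitened values

# Sweep a wide range of pre-activations z3; sigmoid(z3) stays in (0, 1).
z3 = np.linspace(-10.0, 10.0, 10001)
outputs = sigmoid(z3)

# Best achievable squared error per target with a sigmoid decoder:
best_err = np.array([np.min((outputs - t)**2) for t in targets])
# Targets inside (0, 1) are reachable; targets outside are not,
# so the reconstruction error has a floor a linear decoder avoids.
```

With a linear decoder (a3 = z3) the output range is unbounded, so every whitened target is reachable and this error floor disappears.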

轉(zhuǎn)載于:https://www.cnblogs.com/ganganloveu/p/4218111.html
