

Machine Learning Week 8 ex7 Review

Published: 2025/3/15


This week covers K-means clustering and applies it to image compression.


1 K-means clustering

We start with two-dimensional points and use K-means to cluster them.

1.1 Implement K-means


The K-means procedure is shown above: in each iteration, every point is first reassigned to its nearest cluster, and then the center of each cluster is recomputed.

1.1.1 Finding closest centroids

For each example, use the formula

$$c^{(i)} := \arg\min_j \lVert x^{(i)} - \mu_j \rVert^2$$

to find the centroid closest to it and record that centroid's index. If several centroids are equally close, any one of them may be chosen.
The code is as follows:

function idx = findClosestCentroids(X, centroids)
%FINDCLOSESTCENTROIDS computes the centroid memberships for every example
%   idx = FINDCLOSESTCENTROIDS (X, centroids) returns the closest centroids
%   in idx for a dataset X where each row is a single example. idx = m x 1
%   vector of centroid assignments (i.e. each entry in range [1..K])
%

% Set K
K = size(centroids, 1);

% You need to return the following variables correctly.
idx = zeros(size(X,1), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Go over every example, find its closest centroid, and store
%               the index inside idx at the appropriate location.
%               Concretely, idx(i) should contain the index of the centroid
%               closest to example i. Hence, it should be a value in the
%               range 1..K
%
% Note: You can use a for-loop over the examples to compute this.
%

for i = 1:size(X,1)
    dist = pdist([X(i,:); centroids])(:, 1:K);
    [row, col] = find(dist == min(dist));
    idx(i) = col(1);
end;
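The exercise code above is Octave. As a cross-check, the same assignment step can be sketched in NumPy (the function name and toy data below are my own, and NumPy indices are 0-based where the Octave version uses 1..K):

```python
import numpy as np

def find_closest_centroids(X, centroids):
    """For each row of X, return the index (0-based) of the nearest centroid."""
    # Broadcast to an (m, K) matrix of squared distances, then take the argmin.
    dist2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dist2.argmin(axis=1)

X = np.array([[1.0, 1.0], [6.0, 2.0], [8.0, 6.0]])
centroids = np.array([[3.0, 3.0], [6.0, 2.0], [8.0, 5.0]])
idx = find_closest_centroids(X, centroids)  # one cluster label per example
```

Squaring the distance does not change the argmin, so the square root in the formula can be skipped.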

1.1.2 Compute centroid means

For each centroid, use the formula

$$\mu_k := \frac{1}{|C_k|} \sum_{i \in C_k} x^{(i)}$$

to compute the mean of all points assigned to that cluster (i.e. its center) and update the centroid accordingly.
The code is as follows:

function centroids = computeCentroids(X, idx, K)
%COMPUTECENTROIDS returns the new centroids by computing the means of the
%data points assigned to each centroid.
%   centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by
%   computing the means of the data points assigned to each centroid. It is
%   given a dataset X where each row is a single data point, a vector
%   idx of centroid assignments (i.e. each entry in range [1..K]) for each
%   example, and K, the number of centroids. You should return a matrix
%   centroids, where each row of centroids is the mean of the data points
%   assigned to it.
%

% Useful variables
[m n] = size(X);

% You need to return the following variables correctly.
centroids = zeros(K, n);

% ====================== YOUR CODE HERE ======================
% Instructions: Go over every centroid and compute mean of all points that
%               belong to it. Concretely, the row vector centroids(i, :)
%               should contain the mean of the data points assigned to
%               centroid i.
%
% Note: You can use a for-loop over the centroids to compute this.
%

for i = 1:K
    centroids(i,:) = mean(X(find(idx == i), :));
end;

% =============================================================

end
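The update step translates to NumPy in the same way (again a sketch with names and test data of my own; the empty-cluster guard is a design choice the Octave version does not make):

```python
import numpy as np

def compute_centroids(X, idx, K):
    """Each new centroid is the mean of the points currently assigned to it."""
    centroids = np.zeros((K, X.shape[1]))
    for k in range(K):
        members = X[idx == k]
        if len(members) > 0:  # leave empty clusters at the origin (a design choice)
            centroids[k] = members.mean(axis=0)
    return centroids

X = np.array([[1.0, 1.0], [3.0, 3.0], [10.0, 10.0]])
idx = np.array([0, 0, 1])   # first two points in cluster 0, last one in cluster 1
mu = compute_centroids(X, idx, 2)
```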

1.2 K-means on example dataset

ex7.m provides an example in which the initial centroids have already been set by hand.

% Settings for running K-Means
K = 3;
max_iters = 10;

% For consistency, here we set centroids to specific values
% but in practice you want to generate them automatically, such as by
% setting them to be random examples (as can be seen in
% kMeansInitCentroids).
initial_centroids = [3 3; 6 2; 8 5];

As shown above, we cluster the points into three classes, running 10 iterations. The three centroids are initialized to (3, 3), (6, 2), and (8, 5).
This produces the images below (the intermediate iterations are omitted; only the initial and final plots are shown).
The initial plot:

The plot after 10 iterations:

The three groups of points are cleanly separated into three clusters, and the plot also traces the movement of each centroid.
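Putting the two steps together, the loop that produced these plots can be sketched in NumPy (the course supplies its own runkMeans.m; this re-implementation, with my own names and synthetic blobs around the same three centers, is only illustrative):

```python
import numpy as np

def run_kmeans(X, initial_centroids, max_iters=10):
    """Alternate the assignment and update steps for a fixed number of iterations."""
    centroids = initial_centroids.astype(float).copy()
    idx = np.zeros(len(X), dtype=int)
    for _ in range(max_iters):
        # Assignment step: nearest centroid for every point (0-based labels).
        dist2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        idx = dist2.argmin(axis=1)
        # Update step: move each centroid to the mean of its members.
        for k in range(len(centroids)):
            if (idx == k).any():
                centroids[k] = X[idx == k].mean(axis=0)
    return centroids, idx

# Three tight blobs around the same centers used in ex7.m.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in [(3, 3), (6, 2), (8, 5)]])
centroids, idx = run_kmeans(X, np.array([[3, 3], [6, 2], [8, 5]]), max_iters=10)
```

With well-separated blobs the centroids settle close to the true blob centers within a few iterations.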

1.3 Random initialization

To make the results easy to verify, ex7.m fixes the initial centroids. In practice, however, they should be initialized randomly.
Complete the code as follows:

function centroids = kMeansInitCentroids(X, K)
%KMEANSINITCENTROIDS This function initializes K centroids that are to be
%used in K-Means on the dataset X
%   centroids = KMEANSINITCENTROIDS(X, K) returns K initial centroids to be
%   used with the K-Means on the dataset X
%

% You should return this values correctly
centroids = zeros(K, size(X, 2));

% ====================== YOUR CODE HERE ======================
% Instructions: You should set centroids to randomly chosen examples from
%               the dataset X
%

% Initialize the centroids to be random examples

% Randomly reorder the indices of examples
randidx = randperm(size(X, 1));
% Take the first K examples as centroids
centroids = X(randidx(1:K), :);

% =============================================================

end

The initial centroids are thus K examples chosen at random from X.
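The same permute-and-take-K idea in NumPy (function name and seed are my own; seeding is only for reproducibility here):

```python
import numpy as np

def kmeans_init_centroids(X, K, seed=0):
    """Pick K distinct examples of X at random to serve as initial centroids."""
    rng = np.random.default_rng(seed)
    randidx = rng.permutation(X.shape[0])  # shuffle the row indices
    return X[randidx[:K]]                  # first K shuffled rows

X = np.arange(10, dtype=float).reshape(5, 2)
c = kmeans_init_centroids(X, 3)
```

Permuting before slicing (rather than sampling indices independently) guarantees the K centroids are distinct examples.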

1.4 Image compression with K-means

Now we apply K-means to image compression.
In a standard RGB image each pixel is stored with 24 bits (8 bits per channel). Here we compress the image by clustering all of its colors into 16 classes and replacing every color in a class with the color of its centroid, so each pixel needs only 4 bits plus a small shared 16-color palette, greatly reducing the storage required.
On the example image provided with the exercise, the result looks roughly as follows:
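A minimal NumPy sketch of the compression idea (the function name and the toy 2x2 "image" are my own; a real image would be reshaped from H x W x 3 into a pixel list, and the centroids would be initialized randomly as in section 1.3 rather than deterministically as in this demo):

```python
import numpy as np

def quantize_colors(pixels, initial_centroids, iters=10):
    """Cluster pixel colors with K-means and replace each pixel by its centroid color."""
    centroids = initial_centroids.astype(float).copy()
    for _ in range(iters):
        # Assign each pixel to its nearest color centroid.
        idx = ((pixels[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        # Move each centroid to the mean color of its pixels.
        for k in range(len(centroids)):
            if (idx == k).any():
                centroids[k] = pixels[idx == k].mean(0)
    return centroids[idx], idx

# A 2x2 "image": two reddish and two bluish pixels, flattened to a (4, 3) array.
img = np.array([[[250, 0, 0], [255, 5, 0]],
                [[0, 0, 250], [5, 0, 255]]], dtype=float)
pixels = img.reshape(-1, 3)
# Deterministic init for the demo: one red pixel and one blue pixel.
compressed, idx = quantize_colors(pixels, pixels[[0, 2]])
```

Only the per-pixel labels in `idx` plus the 16-color (here 2-color) palette need to be stored; `centroids[idx]` reconstructs the approximate image.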

1.5 (Ungraded) Use your own image

Pick any local image, first resize it (e.g. in Photoshop) so it is reasonably small (otherwise the run is very slow), then run the algorithm. The result:


2 Principal component analysis

We use PCA to reduce the dimensionality of vectors.

2.1 Example dataset

We first reduce the two-dimensional vectors in the example to one dimension.
The scatter plot is shown below:

2.2 Implementing PCA

First compute the covariance matrix of the data,
then use the SVD function in Octave/MATLAB to compute its eigenvectors.

Before doing so, the data should be normalized (mean normalization and, where needed, feature scaling).
The covariance matrix is computed as

$$\Sigma = \frac{1}{m} X^T X$$

and its eigenvectors are then obtained with the SVD function.
pca.m is therefore completed as follows:

function [U, S] = pca(X)
%PCA Run principal component analysis on the dataset X
%   [U, S, X] = pca(X) computes eigenvectors of the covariance matrix of X
%   Returns the eigenvectors U, the eigenvalues (on diagonal) in S
%

% Useful values
[m, n] = size(X);

% You need to return the following variables correctly.
U = zeros(n);
S = zeros(n);

% ====================== YOUR CODE HERE ======================
% Instructions: You should first compute the covariance matrix. Then, you
%               should use the "svd" function to compute the eigenvectors
%               and eigenvalues of the covariance matrix.
%
% Note: When computing the covariance matrix, remember to divide by m (the
%       number of examples).
%

[U, S, V] = svd(1/m * X' * X);

% =========================================================================

end
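The same computation in NumPy, as a sanity check (function name and test data are my own; like the Octave version, it assumes X has already been mean-normalized):

```python
import numpy as np

def pca(X):
    """Eigenvectors/eigenvalues of the covariance matrix Sigma = (1/m) X'X via SVD."""
    m = X.shape[0]
    Sigma = (X.T @ X) / m            # assumes X is already mean-normalized
    U, S, _ = np.linalg.svd(Sigma)   # columns of U are the principal directions
    return U, S

# Zero-mean data lying exactly on the line y = x: all variance in one direction.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -2.0]])
U, S = pca(X)
```

Because the data is perfectly correlated, the second eigenvalue is zero and the first eigenvector points along (1, 1)/sqrt(2) (up to sign).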

The computed eigenvectors, drawn on the plot:

2.3 Dimensionality reduction with PCA

We now project high-dimensional examples onto a lower-dimensional space.

2.3.1 Projecting the data onto the principal components

Complete projectData.m as follows:

function Z = projectData(X, U, K)
%PROJECTDATA Computes the reduced data representation when projecting only
%on to the top k eigenvectors
%   Z = projectData(X, U, K) computes the projection of
%   the normalized inputs X into the reduced dimensional space spanned by
%   the first K columns of U. It returns the projected examples in Z.
%

% You need to return the following variables correctly.
Z = zeros(size(X, 1), K);

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the projection of the data using only the top K
%               eigenvectors in U (first K columns).
%               For the i-th example X(i,:), the projection on to the k-th
%               eigenvector is given as follows:
%                    x = X(i, :)';
%                    projection_k = x' * U(:, k);
%

Ureduce = U(:, 1:K);
Z = X * Ureduce;

% =============================================================

end

This projects X onto the K-dimensional space.
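The projection is a single matrix product, as a NumPy sketch shows (the orthonormal basis and points below are my own illustrative data):

```python
import numpy as np

def project_data(X, U, K):
    """Project X onto the subspace spanned by the first K columns of U."""
    return X @ U[:, :K]

# An orthonormal basis (as columns); the first direction is (1, 1)/sqrt(2).
U = np.array([[2 ** -0.5, -2 ** -0.5],
              [2 ** -0.5,  2 ** -0.5]])
X = np.array([[1.0, 1.0], [3.0, 3.0]])
Z = project_data(X, U, K=1)   # each 2-D point collapses to a single coordinate
```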

2.3.2 Reconstructing an approximation of the data

Recovering an approximation of the high-dimensional data from the projection:

function X_rec = recoverData(Z, U, K)
%RECOVERDATA Recovers an approximation of the original data when using the
%projected data
%   X_rec = RECOVERDATA(Z, U, K) recovers an approximation the
%   original data that has been reduced to K dimensions. It returns the
%   approximate reconstruction in X_rec.
%

% You need to return the following variables correctly.
X_rec = zeros(size(Z, 1), size(U, 1));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the approximation of the data by projecting back
%               onto the original space using the top K eigenvectors in U.
%
%               For the i-th example Z(i,:), the (approximate)
%               recovered data for dimension j is given as follows:
%                    v = Z(i, :)';
%                    recovered_j = v' * U(j, 1:K)';
%
%               Notice that U(j, 1:K) is a row vector.
%

Ureduce = U(:, 1:K);
X_rec = Z * Ureduce';

% =============================================================

end
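A NumPy round-trip makes the project/recover pair concrete (names and data are my own; the test points lie exactly on the first principal direction, so the reconstruction is exact rather than approximate):

```python
import numpy as np

def project_data(X, U, K):
    return X @ U[:, :K]

def recover_data(Z, U, K):
    """Map K-dimensional projections back into the original space."""
    return Z @ U[:, :K].T

U = np.array([[2 ** -0.5, -2 ** -0.5],
              [2 ** -0.5,  2 ** -0.5]])
# These points lie exactly along U's first column, so nothing is lost.
X = np.array([[1.0, 1.0], [2.0, 2.0]])
X_rec = recover_data(project_data(X, U, 1), U, 1)
```

For general data, `X_rec` is the closest point to each example within the K-dimensional subspace; whatever lies in the discarded directions cannot be recovered.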

2.3.3 Visualizing the projections


As the figure above shows, the recovered data retains only the information along one eigenvector; the information in the perpendicular direction is lost.

2.4 Face image dataset

Now we apply dimension reduction to face images. ex7faces.mat contains a large number of grayscale face images, each flattened into a vector of its pixel values.
The first one hundred faces are shown below:

2.4.1 PCA on faces

Run PCA to obtain the principal components, reshape each of them back into an image matrix, and visualize them as follows (only the first 36 are shown):

2.4.2 Dimensionality reduction

Project the faces onto the first 100 eigenvectors.

After the dimensionality reduction, the overall structure of each face is preserved, but some detail is lost. The takeaway: when training a neural network for face recognition, this kind of reduction can sometimes be used to speed things up.

2.5 Optional (Ungraded) exercise: PCA for visualization

PCA is often used to visualize high-dimensional data.
In the figure below, K-means is used to cluster points in three-dimensional space.

Rotating the plot shows that the points lie roughly on a plane.

We therefore use PCA to reduce them to two dimensions and examine the scatter plot:

This makes it much easier to inspect the clustering.


Reposted from: https://www.cnblogs.com/EtoDemerzel/p/7881396.html
