

[Algorithm] 07 A C++ Implementation of the AM-MCMC Algorithm

Published: 2023/12/9

Goals

2022/4/17-2022/5/10

Implement the adaptive MCMC method (the Adaptive Metropolis algorithm).

Local directory: E:\Research\OptA\MCMC

Questions and discussion are welcome! Email: lujiabo@hhu.edu.cn (LU Jiabo)
Please mention the blog title and link in your message. Thanks.

A brief introduction to MCMC

MCMC methods rest on the Bayesian framework: they construct a Markov chain whose equilibrium distribution is the target $\pi(x)$, sample from that equilibrium distribution, and keep updating the sample information so that the chain explores the model parameter space thoroughly and eventually converges to the high-probability-density region. MCMC is therefore an approximation to ideal Bayesian inference. The key to an MCMC method is constructing an effective proposal distribution, one that ensures the samples drawn from it converge to the high-probability-density region.

MCMC provides an effective tool for building practical statistical models: it turns complex high-dimensional problems into a series of simple low-dimensional ones, which makes it well suited to Bayesian computation for complex statistical models. The MCMC methods most widely used in Bayesian analysis are based on Gibbs sampling and the Metropolis-Hastings algorithm [2,8-9]. Building on these, later work refined and extended them; Haario et al. (2001) [1] proposed the Adaptive Metropolis (AM) algorithm, whose main feature is a multivariate normal proposal distribution over the parameter space whose covariance matrix is adapted as the run evolves, greatly improving the convergence speed.

Model description

Linear model

AM-MCMC is used to estimate the parameters of a linear regression model:

$y = kx + b$

Parameters

The two parameters subject to AM-MCMC uncertainty estimation are $k$ and $b$:

| Parameter | Meaning | True value | Lower bound | Upper bound |
|-----------|---------|------------|-------------|-------------|
| $k$ | slope | 2 | -3 | 3 |
| $b$ | intercept | -1 | -3 | 3 |

Observations

The observations are the function values at the integers from -5 to 5, each perturbed with normal noise $N(0, 0.16)$, i.e.

True values: $y = -11, -9, -7, -5, -3, -1, 1, 3, 5, 7, 9$, for integer $x \in [-5, 5]$, which become

Perturbed values: $y = -10.79, -9.4, -6.77, -5.3, -3.52, -0.97, 1.3, 3.13, 5.16, 7.56, 8.8$

AM-MCMC settings

The initial covariance matrix $C_0$ is diagonal, with each variance set to 1/20 of the corresponding parameter range; the initial iteration count is $t_0 = 100$. The algorithm is run 5 times in parallel, with 5000 samples drawn per run.

| Simulations | Parameters | Parallel chains | Confidence level (%) | Burn-in |
|-------------|------------|-----------------|----------------------|---------|
| 5000 | 2 | 5 | 90 | 500 |

Likelihood function

The likelihood function given by Box & Tiao is used; see [5], p. 64, Eq. (4-14):

$p(\theta^{t} \mid y) \propto \left[ \sum_{i=1}^{N} e(\theta^{t})_i^2 \right]^{-\frac{N}{2}}$

Analysis

Convergence diagnostics

For a single chain, stationarity can be checked with the mean and variance methods: examine whether the running mean and variance of each parameter stabilize over the iterations [3]. For multiple chains, convergence is checked with the scale reduction factor $\sqrt{R}$ proposed by Gelman [7] in 1992: $\sqrt{R}$ is computed for each parameter, and a value close to 1 indicates that the parameter has converged to its posterior distribution. Gelman and Rubin suggest $\sqrt{R} < 1.2$ as the convergence criterion for multi-chain sampling.

Single chain

Plots of the running mean and variance of a single sampling chain. The figures show that after $t > 500$ the means and variances of $k$ and $b$ are essentially stable, so the single chain converges.

Slope k


Intercept b

Multiple chains

Evolution of $\sqrt{R}$: it fluctuates strongly in the early iterations; after $t > 200$, $\sqrt{R}$ stabilizes close to 1.0, showing that the MCMC chains of both $k$ and $b$ converge stably to their posterior distributions and that the algorithm converges globally. Combining the single-chain and multi-chain diagnostics, the first 500 samples of each chain are discarded, so the 5 parallel runs provide 22500 samples in total for the statistical analysis of the parameter posteriors.

Slope k

Intercept b

Parameter posterior distributions

The posterior distributions of $k$ and $b$ obtained from the MCMC samples above. The figures show that $b$ carries greater uncertainty than $k$.

Slope k

Intercept b

Confidence intervals

The normalized likelihood weights are sorted in ascending order of the simulated values, the simulated values at the 5%, 50%, and 95% cumulative probabilities are extracted, and the results are plotted below. Most of the observations fall within the 90% confidence interval.

Conclusions

Compared with the Metropolis-Hastings algorithm, the Adaptive Metropolis (AM) algorithm does not require the proposal distribution to be fixed in advance. Compared with Gibbs sampling, AM does not require deriving conditional distributions, which are usually hard to obtain for the individual parameters of complex hydrological models. Compared with the GLUE method, AM rests on the Bayesian framework and approximates ideal Bayesian inference, whereas GLUE estimates parameter uncertainty mainly through random sampling.

References

[1]Haario, H., Saksman, E., & Tamminen, J. (2001). An Adaptive Metropolis Algorithm. Bernoulli, 7(2), 223–242. https://doi.org/10.2307/3318737
[2]Kuczera G, Parent E. (1998). Monte Carlo assessment of parameter uncertainty in conceptual catchment models: the Metropolis algorithm. Journal of Hydrology, 211(1), 69-85. https://doi.org/10.1016/S0022-1694(98)00198-X
[3]梁忠民,戴榮,綦晶. 基于MCMC的水文模型參數不確定性及其對預報的影響分析[C]. 環境變化與水安全——第五屆中國水論壇論文集.,2007:126-130.
[4]李向陽. 水文模型參數優選及不確定性分析方法研究[D].大連理工大學,2006.
[5]曹飛鳳. 基于MCMC方法的概念性流域水文模型參數優選及不確定性研究[D].浙江大學,2010.
[6]Thiemann M, Trosset M, Gupta H, Sorooshian S. (2001). Bayesian recursive parameter estimation for hydrologic models. Water Resources Research, 37(10), 2521-2535. https://doi.org/10.1029/2000WR900405
[7]Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical science, 7(4), 457-472. https://doi.org/10.1214/ss/1177011136
[8]W. K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika, Volume 57, Issue 1, April 1970, Pages 97–109, https://doi.org/10.1093/biomet/57.1.97
[9]Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., & Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087-1092. https://doi.org/10.1063/1.1699114
[10]Armadillo. C++ library for linear algebra & scientific computing. http://arma.sourceforge.net/
[11]Conrad Sanderson and Ryan Curtin. Armadillo: a template-based C++ library for linear algebra. Journal of Open Source Software, Vol. 1, pp. 26, 2016. http://dx.doi.org/10.21105/joss.00026
[12]Conrad Sanderson and Ryan Curtin. An Adaptive Solver for Systems of Linear Equations. International Conference on Signal Processing and Communication Systems, pp. 1-6, 2020.
[13]Matplotlib for C++. https://github.com/lava/matplotlib-cpp

Source code

AM is the single-chain sampler; PAM is the parallel sampler and inherits from AM. Because the two classes are tightly coupled, all members of AM are declared public. main.cpp is the entry point: it creates a PAM object and calls its parameter estimation function Estimate(). The parameter bounds, initial values, observations, and algorithm settings are set in AM::Initialize(); the number of parallel chains and the confidence level are set in the PAM constructor.

The program uses Matplotlib-cpp [13] for data visualization; for setup see the post 【C++】11 Visual Studio 2019 C++安裝matplotlib-cpp繪圖.

It also uses the linear algebra library Armadillo [10-12] for multivariate normal sampling; see the post 【C++】13 多元正態分布抽樣.

AM.h

```cpp
#pragma once
/******************************************************************************
 File:     AM.h
 Author:   LU Jiabo, College of Hydrology and Water Resources, Hohai University
 Email:    lujiabo@hhu.edu.cn    QQ: 1847096852
 Version:  V1.0, created 2022.5
 License:  MIT
 Citation: LU Jiabo, Adaptive Metropolis algorithm of Markov Chain Monte Carlo
           in C++. Nanjing: Hohai University, 2022.
 References: see [1]-[13] in the reference list of this post.
******************************************************************************/
#include <string>
#include <vector>
#include <armadillo> // linear algebra library

class AM
{
public:
    // AM-MCMC uncertainty estimation entry point
    virtual void Estimate();

    // =============== 1. Set parameters and observations ===============
    void Setup();
    // =============== 2. Run the model =================================
    void Run();
    //private:
    // =============== 3. Uncertainty analysis ==========================
    void Analysis();
    // =============== 4. Save the results ==============================
    void Save();

    // 1. Initialize, t = 0
    void Initialize();
    // 2. Compute the covariance matrix Ct
    void CalculateCovariance();
    // 3. Draw a candidate point from the proposal distribution
    void Sample();
    // 4. Compute the acceptance probability alpha
    void CalAcceptanceProbability();
    // 5. Generate a random number u in [0, 1]
    void GenerateRandomNumber();
    // 6. Accept or reject the candidate point
    void IfAcceptCandidatePoint();
    // 7. Repeat steps 2-6 until enough samples have been drawn
    void Repeat();

    // empirical covariance matrix
    void CalEmpiricalCovariance();
    // recursive covariance update
    void RecurseCovariance();
    // bound check for the candidate point
    bool CheckBound();
    // log of the target probability density
    double logPDF();
    // run the model under analysis to obtain simulated values
    void CallModel();
    // likelihood functions
    void CalculateLikeliFun();
    void CalculateNSE(); // Nash-Sutcliffe efficiency
    void CalculateMSE(); // mean squared error
    void CalculateBOX(); // Box & Tiao likelihood, see [5] p.64 Eq. (4-14)
    // convergence diagnostics
    void CheckConvergence();
    // histogram of the parameter posterior distribution
    virtual void PlotPostDistribution();
    // confidence interval plot
    virtual void PlotConfidenceInterval();

    int dimension;                   // parameter space dimension
    int t0;                          // initial iteration count
    int iterationNumber;             // total iterations
    int currentIterNum;              // current iteration
    int burnInNumber;                // burn-in iterations
    double sd;                       // scaling factor, depends only on the dimension
    double epsilon;                  // small constant keeping Ct non-singular
    double alpha;                    // acceptance probability of the candidate point
    double logDensity;               // log density of the target distribution pi(x)
    double candidatePointLogDensity; // log density of the candidate point
    double randomNumber;             // uniform random number

    // observations
    std::vector<double> observedData;
    // simulated values of one model run, and the array over all runs
    std::vector<double> modelledData;
    std::vector< std::vector<double> > modelledDataArray;
    // likelihood values
    std::vector<double> NSE; // Nash-Sutcliffe efficiency
    std::vector<double> MSE; // mean squared error
    std::vector<double> BOX; // Box & Tiao likelihood

    std::vector<std::string> paramName; // parameter names
    arma::vec lowerBound;          // parameter lower bounds
    arma::vec upperBound;          // parameter upper bounds
    arma::vec candidatePoint;      // candidate point
    arma::vec currentMean;         // mean of the first t samples
    arma::vec previousMean;        // mean of the first t-1 samples
    arma::vec sample;              // current sample
    arma::mat sampleGroup;         // all samples
    arma::mat sampleMean;          // running mean at each iteration
    arma::mat sampleVariance;      // running variance at each iteration
    arma::mat sampleGroupBurnIn;   // samples with burn-in removed
    arma::mat C0;                  // initial covariance matrix
    arma::mat Ct;                  // covariance matrix at iteration t
    arma::mat Id;                  // identity matrix
    arma::mat empiricalCovariance; // cov(X0, ..., Xt0), empirical covariance
};
```

PAM.h

```cpp
#pragma once
// Parallel Adaptive Metropolis algorithm
#include "AM.h"
#include <armadillo> // linear algebra library

class PAM : public AM
{
public:
    void Estimate() override;
    PAM();

private:
    void ParallelRun();
    void CalculateSRF();
    void PlotSRF();
    void PlotPostDistribution() override;
    void PlotConfidenceInterval() override;
    void Plot();
    // normalized weights of the likelihood values
    void CalculateWeight();
    // prediction bounds from the empirical CDF
    void ecdfPred();
    // sort a weight array and its simulated values by ascending simulated value
    void sort(std::vector<double>& w, std::vector<double>& x);
    // cumulative sum of the weights
    std::vector<double> cumsum(const std::vector<double>& w);
    // sample values at the requested percentiles
    std::vector<double> CalPercentiles(const std::vector<double>& modValue);
    // derive the percentiles from the confidence level
    void SetPercentile();

    int parallelSampleNumber; // number of parallel chains
    int modelledDataSize;     // size of the simulated data
    double confidenceLevel;   // confidence level
    std::vector<double> parallelBOXBurnIn; // BOX likelihoods after burn-in, all chains
    std::vector<double> weight;
    std::vector< std::vector<double> > modelledDataArrayBurnIn; // simulated values after burn-in
    // uncertainty bound prediction
    std::vector<double> percentile;     // percentiles of the confidence interval
    std::vector<double> ecdf;           // empirical cumulative distribution function
    std::vector<double> sampLowerPtile; // lower percentile
    std::vector<double> sampUpperPtile; // upper percentile
    std::vector<double> sampMedian;     // 50% percentile
    arma::mat scaleReductionFactor;     // scale reduction factor sqrt(R)
    arma::cube parallelSampleMean;      // running means of each parallel chain
    arma::cube parallelSampleVariance;  // running variances of each parallel chain
    arma::cube parallelSampleGroupBurnIn; // samples of each parallel chain, burn-in removed
};
```

AM.cpp

```cpp
/******************************************************************************
 File:     AM.cpp
 Author:   LU Jiabo, College of Hydrology and Water Resources, Hohai University
 Email:    lujiabo@hhu.edu.cn    QQ: 1847096852
 Version:  V1.0, created 2022.5
 License:  MIT
 Citation: LU Jiabo, Adaptive Metropolis algorithm of Markov Chain Monte Carlo
           in C++. Nanjing: Hohai University, 2022.
 References: see [1]-[13] in the reference list of this post.
******************************************************************************/
#include "AM.h"

#include <algorithm>
#include <fstream>
#include <numeric>
#include <random>
#include <armadillo>       // linear algebra library
// C++ interface to matplotlib; setup:
// https://blog.csdn.net/weixin_43012724/article/details/124051588
#include "matplotlibcpp.h"
namespace plt = matplotlibcpp;

double AM::logPDF()
{
    CallModel();
    CalculateLikeliFun();
    // BOX works best, MSE second, NSE worst
    //if (NSE.back() <= 0)
    //{
    //    return log(1.0e-6);
    //}
    //else
    //{
    //    return log(NSE.back());
    //}
    //return log(1.0 / MSE.back());
    return BOX.back();
}

void AM::CallModel()
{
    // test model: linear model y = k * x + b, x in [-5, 5]
    // true values: k = 2, b = -1
    for (int x = -5; x <= 5; ++x)
    {
        double temp = candidatePoint(0) * x + candidatePoint(1);
        modelledData.emplace_back(temp);
    }
    modelledDataArray.emplace_back(modelledData);
}

void AM::CalculateLikeliFun()
{
    CalculateNSE();
    CalculateMSE();
    CalculateBOX();
    modelledData.clear();
}

void AM::CalculateNSE()
{
    double measuredValuesSum = std::accumulate(observedData.begin(), observedData.end(), 0.0);
    double measuredValuesAvg = measuredValuesSum / observedData.size();
    double numerator = 0.0;
    double denominator = 0.0;
    for (double val : observedData)
    {
        denominator += pow(val - measuredValuesAvg, 2);
    }
    for (size_t index = 0; index < observedData.size(); ++index)
    {
        numerator += pow(modelledData.at(index) - observedData.at(index), 2);
    }
    double nse = 1 - numerator / denominator;
    NSE.emplace_back(nse);
}

void AM::CalculateMSE()
{
    double squareErrorSum = 0;
    int dataSize = observedData.size();
    for (int i = 0; i < dataSize; ++i)
    {
        squareErrorSum += pow(modelledData.at(i) - observedData.at(i), 2);
    }
    double mse = squareErrorSum / static_cast<double>(dataSize);
    MSE.emplace_back(mse);
}

void AM::CalculateBOX()
{
    double squareErrorSum = 0;
    int dataSize = observedData.size();
    for (int i = 0; i < dataSize; ++i)
    {
        squareErrorSum += pow(modelledData.at(i) - observedData.at(i), 2);
    }
    double box = log(pow(squareErrorSum, -static_cast<double>(dataSize) / 2.0));
    BOX.emplace_back(box);
}

void AM::CheckConvergence()
{
    for (int i = 0; i < dimension; ++i)
    {
        arma::rowvec paramMeanVec = sampleMean.row(i); // running parameter mean
        // convert arma::rowvec to std::vector<double>
        std::vector<double> paramMean = arma::conv_to< std::vector<double> >::from(paramMeanVec);
        plt::plot(paramMean);
        plt::title("Change process diagram of the mean value of parameter " + paramName.at(i));
        plt::xlabel("Number of simulations");
        plt::ylabel("Mean of " + paramName.at(i));
        plt::save(paramName.at(i) + "_Mean" + ".png");
        plt::show();

        arma::rowvec paramVarianceVec = sampleVariance.row(i); // running parameter variance
        std::vector<double> paramVariance = arma::conv_to< std::vector<double> >::from(paramVarianceVec);
        plt::plot(paramVariance);
        plt::title("Change process diagram of the variance value of parameter " + paramName.at(i));
        plt::xlabel("Number of simulations");
        plt::ylabel("Variance of " + paramName.at(i));
        plt::save(paramName.at(i) + "_Variance" + ".png");
        plt::show();
    }
}

void AM::PlotPostDistribution()
{
    sampleGroupBurnIn = sampleGroup.cols(burnInNumber, iterationNumber);
    for (int i = 0; i < dimension; ++i)
    {
        arma::rowvec paramChainVec = sampleGroupBurnIn.row(i);
        std::vector<double> paramChain = arma::conv_to< std::vector<double> >::from(paramChainVec);
        plt::hist(paramChain);
        plt::show();
    }
}

void AM::PlotConfidenceInterval()
{
}

void AM::Estimate()
{
    Setup();
    Run();
    Analysis();
    Save();
}

void AM::Setup()
{
    Initialize();
}

void AM::Run()
{
    Repeat();
}

void AM::Analysis()
{
    CheckConvergence();
    PlotPostDistribution();
    PlotConfidenceInterval();
}

void AM::Save()
{
    sampleGroup.save("sampleGroup.csv", arma::csv_ascii);
    sampleGroupBurnIn.save("sampleGroupBurnIn.csv", arma::csv_ascii);
    sampleMean.save("sampleMean.csv", arma::csv_ascii);
    sampleVariance.save("sampleVariance.csv", arma::csv_ascii);
    std::ofstream fout("likelihood.csv");
    fout << "NSE" << "," << "MSE" << "," << "logBOX" << ",\n";
    int likelihoodSize = BOX.size();
    for (int i = 0; i < likelihoodSize; ++i)
    {
        fout << NSE.at(i) << "," << MSE.at(i) << "," << BOX.at(i) << ",\n";
    }
    fout.close();
}

void AM::Initialize()
{
    dimension = 2;
    t0 = 100;
    iterationNumber = 5000;
    currentIterNum = 0;
    burnInNumber = 500;
    sd = 2.4 * 2.4 / dimension; // scaling factor 2.4^2/d, see [1]
    epsilon = 1.0e-5;
    alpha = 0;
    candidatePointLogDensity = 0;
    randomNumber = 0;
    // observations: each point perturbed with N(0, 0.16) noise
    observedData = { -10.79, -9.4, -6.77, -5.3, -3.52, -0.97, 1.3, 3.13, 5.16, 7.56, 8.8 };
    paramName = { "k", "b" };
    lowerBound = { -3, -3 };
    upperBound = { 3, 3 };
    candidatePoint.zeros(dimension);
    currentMean = { 1, 1 };
    previousMean.zeros(dimension);
    sample = { 1, 1 };
    logDensity = logPDF();
    sampleGroup.zeros(dimension, iterationNumber + 1);
    sampleGroup.col(0) = sample;
    sampleMean.zeros(dimension, iterationNumber + 1);
    sampleMean.col(0) = currentMean;
    sampleVariance.zeros(dimension, iterationNumber + 1);
    C0.zeros(dimension, dimension);
    C0.diag() = (upperBound - lowerBound) / 20.0;
    Ct = C0;
    Id.eye(dimension, dimension);
    empiricalCovariance.zeros(dimension, dimension);
}

void AM::CalculateCovariance()
{
    if (currentIterNum == t0 + 1)
    {
        CalEmpiricalCovariance();
        Ct = sd * empiricalCovariance + sd * epsilon * Id;
    }
    if (currentIterNum > t0 + 1)
    {
        RecurseCovariance();
    }
}

void AM::Sample()
{
    candidatePoint = arma::mvnrnd(sample, Ct);
}

void AM::CalAcceptanceProbability()
{
    if (CheckBound())
    {
        candidatePointLogDensity = logPDF();
        double temp = candidatePointLogDensity - logDensity;
        alpha = std::min(1.0, exp(temp));
    }
    else
    {
        // out-of-bounds candidate: reject it, and repeat the previous entries
        // so the likelihood arrays stay aligned with the iterations
        alpha = 0.0;
        modelledDataArray.emplace_back(modelledDataArray.back());
        NSE.emplace_back(NSE.back());
        MSE.emplace_back(MSE.back());
        BOX.emplace_back(BOX.back());
    }
}

void AM::GenerateRandomNumber()
{
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_real_distribution<double> u(0.0, 1.0);
    randomNumber = u(gen);
}

void AM::IfAcceptCandidatePoint()
{
    if (alpha >= randomNumber)
    {
        sample = candidatePoint;
        logDensity = candidatePointLogDensity;
    }
    sampleGroup.col(currentIterNum) = sample;
    // update the variance; do not take the variance of the whole sampleGroup,
    // since it still contains the zeros from initialization
    sampleVariance.col(currentIterNum) = arma::var(sampleGroup.cols(0, currentIterNum), 0, 1); // row-wise, divisor N-1
    // update the mean
    previousMean = currentMean;
    currentMean = (currentIterNum * previousMean + sample) / static_cast<double>(currentIterNum + 1);
    sampleMean.col(currentIterNum) = currentMean;
}

void AM::Repeat()
{
    while (currentIterNum < iterationNumber)
    {
        currentIterNum++;
        CalculateCovariance();
        Sample();
        CalAcceptanceProbability();
        GenerateRandomNumber();
        IfAcceptCandidatePoint();
    }
    sampleGroupBurnIn = sampleGroup.cols(burnInNumber, iterationNumber);
}

void AM::CalEmpiricalCovariance()
{
    arma::mat tempSum(dimension, dimension, arma::fill::zeros);
    // columns beyond t0 are still zero at this point and contribute nothing
    for (arma::uword col = 0; col < sampleGroup.n_cols; ++col)
    {
        arma::vec x = sampleGroup.col(col);
        tempSum += x * x.t();
    }
    empiricalCovariance = (tempSum - (t0 + 1) * previousMean * previousMean.t())
        / static_cast<double>(t0);
}

void AM::RecurseCovariance()
{
    // recursive covariance update, Haario et al. (2001), Eq. (3)
    int t = currentIterNum;
    Ct = (static_cast<double>(t - 1) / static_cast<double>(t)) * Ct
        + sd / static_cast<double>(t)
        * (t * previousMean * previousMean.t()
            - (t + 1) * currentMean * currentMean.t()
            + sample * sample.t()
            + epsilon * Id);
}

bool AM::CheckBound()
{
    for (int i = 0; i < dimension; i++)
    {
        if ((candidatePoint(i) < lowerBound(i)) || (candidatePoint(i) > upperBound(i)))
        {
            return false;
        }
    }
    return true;
}
```

PAM.cpp

```cpp
#include "PAM.h"

#include <algorithm>
#include <numeric>
// C++ interface to matplotlib; setup:
// https://blog.csdn.net/weixin_43012724/article/details/124051588
#include "matplotlibcpp.h"
namespace plt = matplotlibcpp;

void PAM::Estimate()
{
    ParallelRun();
    CheckConvergence();
    CalculateSRF();
    PlotSRF();
    PlotPostDistribution();
    PlotConfidenceInterval();
    Save();
}

PAM::PAM()
{
    Setup();
    confidenceLevel = 0.9;
    parallelSampleNumber = 5;
    // scale reduction factor sqrt(R)
    scaleReductionFactor = arma::mat(dimension, iterationNumber, arma::fill::zeros);
    // running means of each parallel chain
    parallelSampleMean = arma::cube(dimension, iterationNumber + 1, parallelSampleNumber);
    // running variances of each parallel chain
    parallelSampleVariance = arma::cube(dimension, iterationNumber + 1, parallelSampleNumber);
    // samples of each parallel chain with burn-in removed
    parallelSampleGroupBurnIn = arma::cube(dimension, iterationNumber - burnInNumber + 1, parallelSampleNumber);
}

void PAM::ParallelRun()
{
    for (int i = 0; i < parallelSampleNumber; ++i)
    {
        Setup();
        Run();
        parallelSampleMean.slice(i) = sampleMean;
        parallelSampleVariance.slice(i) = sampleVariance;
        parallelSampleGroupBurnIn.slice(i) = sampleGroupBurnIn;
    }
}

void PAM::CalculateSRF()
{
    arma::mat tempMean(dimension, parallelSampleNumber, arma::fill::zeros);
    arma::mat tempVariance(dimension, parallelSampleNumber, arma::fill::zeros);
    for (int i = 1; i < iterationNumber + 1; ++i)
    {
        for (int j = 0; j < parallelSampleNumber; ++j)
        {
            tempMean.col(j) = parallelSampleMean.slice(j).col(i);
            tempVariance.col(j) = parallelSampleVariance.slice(j).col(i);
        }
        arma::vec B = arma::var(tempMean, 0, 1);   // row-wise variance, divisor N-1
        arma::vec W = arma::mean(tempVariance, 1); // row-wise mean
        scaleReductionFactor.col(i - 1) = arma::sqrt(
            static_cast<double>(i - 1) / static_cast<double>(i)
            + ((parallelSampleNumber + 1) / static_cast<double>(parallelSampleNumber))
            * B / W);
    }
}

void PAM::PlotSRF()
{
    for (int i = 0; i < dimension; ++i)
    {
        arma::rowvec paramSRFVec = scaleReductionFactor.row(i);
        // convert arma::rowvec to std::vector<double>
        std::vector<double> SRFVec = arma::conv_to< std::vector<double> >::from(paramSRFVec);
        plt::plot(SRFVec);
        plt::title("Change process diagram of Scale Reduction Factor of parameter " + paramName.at(i));
        plt::xlabel("Number of simulations");
        plt::ylabel("Scale Reduction Factor $\\sqrt{R}$");
        plt::save(paramName.at(i) + "_Scale Reduction Factor" + ".png");
        plt::show();
    }
}

void PAM::PlotPostDistribution()
{
    int size = iterationNumber - burnInNumber + 1;
    arma::mat tempSampleBurnIn = arma::mat(dimension, size * parallelSampleNumber, arma::fill::zeros);
    // gather the burn-in-free samples of all chains
    for (int i = 0; i < parallelSampleNumber; ++i)
    {
        tempSampleBurnIn.cols(i * size, (i + 1) * size - 1) = parallelSampleGroupBurnIn.slice(i);
    }
    for (int i = 0; i < dimension; ++i)
    {
        arma::rowvec paramChainVec = tempSampleBurnIn.row(i);
        std::vector<double> paramChain = arma::conv_to< std::vector<double> >::from(paramChainVec);
        plt::hist(paramChain);
        plt::title("Posterior distribution histogram for parameter " + paramName.at(i));
        plt::xlabel(paramName.at(i));
        plt::ylabel("Frequency");
        plt::save(paramName.at(i) + "_Posterior distribution histogram" + ".png");
        plt::show();
    }
}

void PAM::PlotConfidenceInterval()
{
    CalculateWeight();
    SetPercentile();
    ecdfPred();
    Plot();
}

void PAM::Plot()
{
    std::vector<double> xAxis = { -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5 };
    plt::plot(xAxis, sampLowerPtile);
    plt::plot(xAxis, sampUpperPtile);
    plt::plot(xAxis, sampMedian);
    plt::scatter(xAxis, observedData, 30, { {"color", "red"}, {"marker", "o"} });
    plt::title("Median and " + std::to_string(static_cast<int>(confidenceLevel * 100))
        + "% AM-MCMC prediction limits with BOX");
    plt::xlabel("x");
    plt::ylabel("y");
    plt::save("ConfidenceInterval.png");
    plt::show();
}

void PAM::CalculateWeight()
{
    int size = iterationNumber - burnInNumber + 1;
    int sizeTotal = iterationNumber + 1;
    // drop the first entry of each array: it was produced when the PAM
    // constructor initialized the base class
    modelledDataArray.erase(modelledDataArray.begin());
    NSE.erase(NSE.begin());
    MSE.erase(MSE.begin());
    BOX.erase(BOX.begin());
    arma::vec BOXvec(BOX); // convert to an arma vector
    arma::vec BOXBurnIn = arma::vec(size * parallelSampleNumber);
    for (int i = 0; i < parallelSampleNumber; ++i)
    {
        BOXBurnIn.rows(i * size, (i + 1) * size - 1) =
            BOXvec.rows(burnInNumber + i * sizeTotal, iterationNumber + i * sizeTotal);
    }
    parallelBOXBurnIn = arma::conv_to< std::vector<double> >::from(BOXBurnIn);
    double parallelBOXBurnInSum = std::accumulate(parallelBOXBurnIn.begin(), parallelBOXBurnIn.end(), 0.0);
    for (auto box : parallelBOXBurnIn)
    {
        double wgt = box / parallelBOXBurnInSum;
        weight.emplace_back(wgt);
    }
}

void PAM::ecdfPred()
{
    int size = iterationNumber - burnInNumber + 1;
    int sizeTotal = iterationNumber + 1;
    modelledDataArrayBurnIn.assign(size * parallelSampleNumber, { 0 });
    for (int i = 0; i < parallelSampleNumber; ++i)
    {
        std::copy(modelledDataArray.begin() + burnInNumber + i * sizeTotal,
            modelledDataArray.begin() + (i + 1) * sizeTotal,
            modelledDataArrayBurnIn.begin() + i * size);
    }
    modelledDataSize = modelledDataArray[0].size();
    for (int i = 0; i < modelledDataSize; ++i)
    {
        std::vector<double> modValue; // all simulated values at one point, burn-in removed
        for (const auto& modData : modelledDataArrayBurnIn)
        {
            modValue.emplace_back(modData.at(i));
        }
        // sort the weights and simulated values by ascending simulated value
        sort(weight, modValue);
        ecdf = cumsum(weight);
        std::vector<double> sampPtile = CalPercentiles(modValue);
        sampLowerPtile.emplace_back(sampPtile.at(0));
        sampUpperPtile.emplace_back(sampPtile.at(1));
        sampMedian.emplace_back(sampPtile.at(2));
    }
}

void PAM::sort(std::vector<double>& w, std::vector<double>& x)
{
    // pair each weight with its simulated value so they can be sorted together
    std::vector< std::pair<double, double> > wx;
    for (size_t i = 0; i < w.size(); ++i)
    {
        wx.emplace_back(w[i], x[i]);
    }
    // lambda comparison rule: sort the pairs by ascending simulated value
    std::sort(wx.begin(), wx.end(), [](const auto& a, const auto& b) { return a.second < b.second; });
    // write the sorted pairs back into the weight array w and value array x
    for (size_t i = 0; i < wx.size(); i++)
    {
        w[i] = wx[i].first;  // weight
        x[i] = wx[i].second; // simulated value
    }
}

std::vector<double> PAM::cumsum(const std::vector<double>& w)
{
    std::vector<double> ecdf;
    for (size_t i = 0; i < w.size(); i++)
    {
        // inclusive running sum: w[0] + ... + w[i]
        double wgt = std::accumulate(w.begin(), w.begin() + i + 1, 0.0);
        ecdf.emplace_back(wgt);
    }
    return ecdf;
}

std::vector<double> PAM::CalPercentiles(const std::vector<double>& modValue)
{
    std::vector<double> sampPtile;
    for (size_t i = 0; i < percentile.size(); i++)
    {
        double percent = percentile.at(i);
        // temporary vector of |ecdf - percent| to locate the nearest percentile
        std::vector<double> tempVector;
        for (double val : ecdf)
        {
            tempVector.emplace_back(fabs(val - percent));
        }
        auto iter = std::min_element(tempVector.begin(), tempVector.end());
        size_t index = iter - tempVector.begin();
        // make sure the interval covers at least the requested range
        if (percent <= 0.5)
        {
            if (ecdf.at(index) > percent && index > 0)
            {
                --index;
            }
        }
        else
        {
            if (ecdf.at(index) < percent && index + 1 < ecdf.size())
            {
                ++index;
            }
        }
        sampPtile.emplace_back(modValue.at(index));
    }
    return sampPtile;
}

void PAM::SetPercentile()
{
    double lowerBound = (1 - confidenceLevel) / 2.0;
    double upperBound = 1 - lowerBound;
    percentile = { lowerBound, upperBound, 0.5 };
}
```

main.cpp

```cpp
#include "PAM.h"

int main()
{
    PAM pamAlgorithm;
    pamAlgorithm.Estimate();
}
```

Notes

Local directory: E:\Research\OptA\MCMC

Gibbs: Gibbs sampling for estimating linear regression parameters

AM: C++ implementation of the Adaptive Metropolis algorithm

RefCode: reference code

RefPaper: reference papers

Progress

2022/3/18: Reproduced the blog post "Gibbs sampling for Bayesian linear regression in Python"; implemented Gibbs sampling for estimating linear regression parameters.

2022/4/17: Read the paper that first introduced MCMC into hydrological parameter uncertainty estimation, "Monte Carlo assessment of parameter uncertainty in conceptual catchment models: the Metropolis algorithm"; the sampler used there is the Metropolis algorithm.

2022/4/18: Downloaded AM-related code from GitHub and AM-related papers from CNKI.

Where AM is described in the papers:

  • 基于AM-MCMC的地震反演方法研究_李遠, p. 27

  • 水文模型參數優選及不確定性分析方法研究, p. 39

  • 基于MCMC方法的概念性流域水文模型參數優選及不確定性研究, p. 63

  • 基于AM-MCMC算法的貝葉斯概率洪水預報模型_邢貞相, p. 2

  • Haario-2001-An-adaptive-metropolis-algorithm (the original AM paper)

2022/4/26: Read the method sections of the papers above; plan to implement AM following the code in E:\Research\OptA\MCMC\RefCode\adaptive_metropolis-master\adaptive_metropolis-master.

2022/4/27: Created the AM folder for the C++ implementation of the AM algorithm.

2022/4/28: Tried multivariate normal distribution sampling in C++.

2022/4/29: Wrote the AM program.

2022/5/4: Finished a first version; the program runs and produces results.

2022/5/5: Morning: passed the test on the linear equation. Afternoon: switched the likelihood function from the Nash-Sutcliffe efficiency (NSE) to the mean squared error (MSE) and the BOX likelihood, with markedly better results; the parameter distributions clearly converge. The influence of different likelihood functions may be worth studying.

2022/5/6: Added plotting, convergence diagnostics, and references.

2022/5/9: Parallel sampling.

2022/5/10: Finished the AM-MCMC program, passed the linear-model parameter estimation test, and wrote this post.
