NILMTK: Digging into the Details of Combinatorial Optimisation (CO) and FHMM

Published: 2023/12/20

The previous post covered the concrete usage; this one digs into how the algorithms are actually implemented in the code!

1. CO

(1) train

As the code below shows, CO first trains on each appliance's power data. To understand the underlying implementation, let's step into the source:

classifiers = {'CO': CombinatorialOptimisation()}
predictions = {}
sample_period = 120  # sampling period is two minutes

for clf_name, clf in classifiers.items():
    print("*" * 20)
    print(clf_name)
    print("*" * 20)
    clf.train(top_5_train_elec, sample_period=sample_period)  # training step

Inside the library, train looks like this:

def train(self, metergroup, num_states_dict=None, **load_kwargs):
    """Train using 1D CO. Places the learnt model in the `model` attribute.

    Parameters
    ----------
    metergroup : a nilmtk.MeterGroup object
    num_states_dict : dict
    **load_kwargs : keyword arguments passed to `meter.power_series()`

    Notes
    -----
    * only uses first chunk for each meter (TODO: handle all chunks).
    """
    if num_states_dict is None:
        num_states_dict = {}

    if self.model:
        raise RuntimeError(
            "This implementation of Combinatorial Optimisation"
            " does not support multiple calls to `train`.")

    num_meters = len(metergroup.meters)
    if num_meters > 12:
        max_num_clusters = 2
    else:
        max_num_clusters = 3

    for i, meter in enumerate(metergroup.submeters().meters):
        print("Training model for submeter '{}'".format(meter))
        power_series = meter.power_series(**load_kwargs)
        chunk = next(power_series)
        num_total_states = num_states_dict.get(meter)
        if num_total_states is not None:
            num_on_states = num_total_states - 1
        else:
            num_on_states = None
        self.train_on_chunk(chunk, meter, max_num_clusters, num_on_states)

        # Check to see if there are any more chunks.
        # TODO handle multiple chunks per appliance.
        try:
            next(power_series)
        except StopIteration:
            pass
        else:
            warn("The current implementation of CombinatorialOptimisation"
                 " can only handle a single chunk.  But there are multiple"
                 " chunks available.  So have only trained on the"
                 " first chunk!")

    print("Done training!")

In brief, the arguments are:

metergroup → top_5_train_elec (the five appliances with the highest energy consumption)

num_states_dict=None (a dict that lets you pre-specify the number of states per meter; it defaults to empty and is consulted via num_states_dict.get(meter))

**load_kwargs → sample_period=sample_period (the sampling period of 120 seconds, i.e. two minutes)

Parameter setup: if more than 12 appliances are being trained, the maximum number of clusters is set to 2, otherwise 3. We have 5 appliances here, so max_num_clusters = 3.

Overall flow: iterate over the 5 appliances and cluster each appliance's power data separately; each appliance's cluster centroids are the power levels of that appliance in its different states.

Per-appliance training:

If the meter already appears in the model, training raises an error (the same meter cannot be trained twice); otherwise the appliance's data is clustered, with inputs chunk (the appliance's power data) and max_num_clusters (the number of clusters). Each appliance's centroids are then stored in self.model as states alongside its training_metadata.

def train_on_chunk(self, chunk, meter, max_num_clusters, num_on_states):
    # Check if we've already trained on this meter
    meters_in_model = [d['training_metadata'] for d in self.model]
    if meter in meters_in_model:
        raise RuntimeError(
            "Meter {} is already in model!"
            "  Can't train twice on the same meter!".format(meter))

    states = cluster(chunk, max_num_clusters, num_on_states)
    self.model.append({
        'states': states,
        'training_metadata': meter})

The result is self.model: a list with one entry per appliance, each holding that appliance's cluster centroids (states) and its training_metadata.

A closer look at the clustering:

_transform_data(X) converts the data format, turning a pd.Series or single-column pd.DataFrame into an ndarray; the data is then clustered to obtain the centroid of each cluster, after which an 'off' state (0 W) is appended. You would expect that with max_num_clusters = 3 plus the extra off state there would be four state values; in fact there are only three. Digging further explains why!
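The tail end of cluster() can be reproduced in isolation; the two 'on' centroid values below are made up for illustration:

```python
import numpy as np

# Suppose _apply_clustering returned these two 'on' centroids (watts, invented)
centroids = np.array([320.4, 1150.7])

centroids = np.append(centroids, 0)               # add the 'off' state
centroids = np.round(centroids).astype(np.int32)  # round to integer watts
centroids = np.unique(centroids)                  # np.unique also sorts

print(centroids)  # → [   0  320 1151]
```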

def cluster(X, max_num_clusters=3, exact_num_clusters=None):
    '''Applies clustering on reduced data,
    i.e. data where power is greater than threshold.

    Parameters
    ----------
    X : pd.Series or single-column pd.DataFrame
    max_num_clusters : int

    Returns
    -------
    centroids : ndarray of int32s
        Power in different states of an appliance, sorted
    '''
    # Find where power consumption is greater than 10
    data = _transform_data(X)

    # Find clusters
    centroids = _apply_clustering(data, max_num_clusters, exact_num_clusters)
    centroids = np.append(centroids, 0)  # add 'off' state
    centroids = np.round(centroids).astype(np.int32)
    centroids = np.unique(centroids)  # np.unique also sorts
    # TODO: Merge similar clusters
    return centroids

Why one state value is missing:

_apply_clustering loops with `for n_clusters in range(1, max_num_clusters)`, so with max_num_clusters = 3, n_clusters only takes the values 1 and 2; that is where the missing state goes. Additionally, each appliance's data is clustered into 1 and then 2 clusters, and the silhouette coefficient (sklearn.metrics.silhouette_score) picks the better n_clusters, which here turns out to be n_clusters = 2.
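The off-by-one is easy to confirm:

```python
max_num_clusters = 3
candidate_cluster_counts = list(range(1, max_num_clusters))
print(candidate_cluster_counts)  # → [1, 2]
# KMeans is only ever tried with 1 or 2 clusters; together with the
# appended 'off' state, each appliance ends up with at most 3 centroids.
```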

def _apply_clustering(X, max_num_clusters, exact_num_clusters=None):
    '''
    Parameters
    ----------
    X : ndarray
    max_num_clusters : int

    Returns
    -------
    centroids : list of numbers
        List of power in different states of an appliance
    '''
    # If we import sklearn at the top of the file then it makes autodoc fail
    from sklearn import metrics

    # Finds whether 2 or 3 gives better Silhouette coefficient.
    # Whichever is higher serves as the number of clusters for that
    # appliance.
    num_clus = -1
    sh = -1
    k_means_labels = {}
    k_means_cluster_centers = {}
    k_means_labels_unique = {}

    # If the exact number of clusters is specified, then use that
    if exact_num_clusters is not None:
        labels, centers = _apply_clustering_n_clusters(X, exact_num_clusters)
        return centers.flatten()

    # Exact number of clusters not specified; use the cluster validity
    # measures to find the optimal number
    for n_clusters in range(1, max_num_clusters):
        try:
            labels, centers = _apply_clustering_n_clusters(X, n_clusters)
            k_means_labels[n_clusters] = labels
            k_means_cluster_centers[n_clusters] = centers
            k_means_labels_unique[n_clusters] = np.unique(labels)
            try:
                sh_n = metrics.silhouette_score(
                    X, k_means_labels[n_clusters], metric='euclidean')
                if sh_n > sh:
                    sh = sh_n
                    num_clus = n_clusters
            except Exception:
                num_clus = n_clusters
        except Exception:
            if num_clus > -1:
                return k_means_cluster_centers[num_clus]
            else:
                return np.array([0])

    return k_means_cluster_centers[num_clus].flatten()

The chosen clustering algorithm is KMeans:

def _apply_clustering_n_clusters(X, n_clusters):
    """
    :param X: ndarray
    :param n_clusters: exact number of clusters to use
    :return:
    """
    from sklearn.cluster import KMeans
    k_means = KMeans(init='k-means++', n_clusters=n_clusters)
    k_means.fit(X)
    return k_means.labels_, k_means.cluster_centers_

That is the whole training process.

(2) disaggregate_chunk

Disaggregation is done by disaggregate_chunk: given the household's aggregate power curve, it splits it across the 5 appliances.

First the centroids from the train() step are extracted and every combination is enumerated; the cartesian function computes the Cartesian product. With 5 models of 3 states each, the result is a state-combination table of 3*3*3*3*3 = 243 rows and 5 columns.
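The enumeration can be reproduced with itertools.product, which walks the combinations in the same nested order as sklearn's cartesian; the centroid values below are invented for illustration:

```python
import itertools
import numpy as np

# Hypothetical centroids for 5 appliances, 3 states each (watts)
centroids = [[0, 100, 2000],
             [0, 50, 300],
             [0, 150, 1500],
             [0, 80, 800],
             [0, 40, 250]]

state_combinations = np.array(list(itertools.product(*centroids)))
print(state_combinations.shape)  # → (243, 5)

# Summing each row gives the total mains power that combination implies
summed = state_combinations.sum(axis=1)
print(summed[0], summed[-1])  # → 0 4850
```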

def _set_state_combinations_if_necessary(self):
    """Get centroids"""
    # If we import sklearn at the top of the file then auto doc fails.

    # Enumerate every possible combination of appliance states
    if (self.state_combinations is None or
            self.state_combinations.shape[1] != len(self.model)):
        from sklearn.utils.extmath import cartesian
        centroids = [model['states'] for model in self.model]
        self.state_combinations = cartesian(centroids)

Next, the state combinations are summed across columns (one total power per combination), and find_nearest is called to obtain, for each mains reading, the index of the closest combination total and the residual. Its inputs are the column-summed state combinations and the household's aggregate power data.

def find_nearest(known_array, test_array):
    """Find closest value in `known_array` for each element in `test_array`.

    Parameters
    ----------
    known_array : numpy array
        consisting of scalar values only; shape: (m, 1)
    test_array : numpy array
        consisting of scalar values only; shape: (n, 1)

    Returns
    -------
    indices : numpy array; shape: (n, 1)
        For each value in `test_array` finds the index of the closest value
        in `known_array`.
    residuals : numpy array; shape: (n, 1)
        For each value in `test_array` finds the difference from the closest
        value in `known_array`.
    """
    # from http://stackoverflow.com/a/20785149/732596

    # Sort the known values ascending and remember the original indices
    index_sorted = np.argsort(known_array)
    known_array_sorted = known_array[index_sorted]

    # Find the insertion point of each mains reading in the sorted array
    idx1 = np.searchsorted(known_array_sorted, test_array)
    idx2 = np.clip(idx1 - 1, 0, len(known_array_sorted) - 1)
    idx3 = np.clip(idx1, 0, len(known_array_sorted) - 1)

    # Distance to the neighbour above the reading...
    diff1 = known_array_sorted[idx3] - test_array
    # ...and to the neighbour below it
    diff2 = test_array - known_array_sorted[idx2]

    # Pick whichever neighbour is closer
    indices = index_sorted[np.where(diff1 <= diff2, idx3, idx2)]
    # Residuals
    residuals = test_array - known_array[indices]
    return indices, residuals
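Since find_nearest works on plain arrays, it is easy to try out standalone. A condensed re-implementation of the same logic, with made-up numbers:

```python
import numpy as np

def find_nearest(known_array, test_array):
    # Same logic as nilmtk's find_nearest, condensed
    index_sorted = np.argsort(known_array)
    known_sorted = known_array[index_sorted]
    idx1 = np.searchsorted(known_sorted, test_array)
    idx2 = np.clip(idx1 - 1, 0, len(known_sorted) - 1)
    idx3 = np.clip(idx1, 0, len(known_sorted) - 1)
    diff1 = known_sorted[idx3] - test_array
    diff2 = test_array - known_sorted[idx2]
    indices = index_sorted[np.where(diff1 <= diff2, idx3, idx2)]
    return indices, test_array - known_array[indices]

# Column-summed state combinations (total watts per combination, invented)...
summed = np.array([0, 150, 400, 550, 2000])
# ...and three mains readings
mains = np.array([120, 560, 1900])

indices, residuals = find_nearest(summed, mains)
print(indices)    # → [1 3 4]
print(residuals)  # → [ -30   10 -100]
```

So the reading of 120 W is matched to combination 1 (150 W total, residual -30 W), and so on.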

With the indices and residuals in hand, each model's share can be computed separately; here indices_of_state_combinations is the same as indices.

appliance_powers_dict = {}
for i, model in enumerate(self.model):
    print("Estimating power demand for '{}'".format(model['training_metadata']))
    predicted_power = state_combinations[indices_of_state_combinations, i].flatten()
    column = pd.Series(predicted_power, index=mains.index, name=i)
    appliance_powers_dict[self.model[i]['training_metadata']] = column
appliance_powers = pd.DataFrame(appliance_powers_dict, dtype='float32')

The core of the disaggregation is this line:

state_combinations[indices_of_state_combinations, i].flatten()

In short: the columns are traversed one by one, and for each appliance the values at the positions given by indices_of_state_combinations become that appliance's disaggregated power.
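The original post illustrated this with a screenshot; here is a small standalone sketch of that indexing step instead, with made-up values:

```python
import numpy as np

# A toy state-combination table: 2 appliances with 2 states each
state_combinations = np.array([[ 0,   0],
                               [ 0, 200],
                               [80,   0],
                               [80, 200]])

# Suppose find_nearest picked combination 3 for the first mains reading
# and combination 1 for the second
indices_of_state_combinations = np.array([3, 1])

for i in range(state_combinations.shape[1]):
    predicted_power = state_combinations[indices_of_state_combinations, i].flatten()
    print(predicted_power)
# → [80  0] for appliance 0, then [200 200] for appliance 1
```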

2. FHMM

(1) train

train uses GaussianHMM several times over, and the reason is not obvious at first sight. Let's read the code. First the meters are passed in, and a check is made that system memory can hold the combined transition-probability matrix.

def _check_memory(num_appliances):
    """Checks if the maximum resident memory is enough to handle the
    combined matrix of transition probabilities."""
    # Each transmat is small (usually 2x2 or 3x3) but the combined
    # matrix is dense, using much more memory

    # Get the approximate memory in MB
    try:
        # If psutil is installed, we can get the correct total
        # physical memory of the system
        import psutil
        available_memory = psutil.virtual_memory().total >> 20
    except ImportError:
        # Otherwise use a crude approximation
        available_memory = 16 << 10

    # We use (num_appliances + 1) here to get a pessimistic approximation:
    # 8 bytes * (2 ** (num_appliances + 1)) ** 2
    required_memory = ((1 << (2 * (num_appliances + 1))) << 3) >> 20

    if required_memory >= available_memory:
        warn("The required memory for the model may be more than the total system memory!"
             " Try using fewer appliances if the training fails.")
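The pessimistic estimate is easy to check by hand for our 5 appliances:

```python
num_appliances = 5

# 8 bytes * (2 ** (num_appliances + 1)) ** 2, written with bit shifts
# exactly as in _check_memory
required_bytes = (1 << (2 * (num_appliances + 1))) << 3
print(required_bytes)  # → 32768 (i.e. a 64x64 matrix of 8-byte floats)

required_mb = required_bytes >> 20
print(required_mb)     # → 0, far below any realistic memory budget
```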

Then, iterating over each appliance to get its state data, after a series of conditionals it turns out that clustering is used to determine each appliance's states:

if num_total_states is None:
    states = cluster(meter_data, max_num_clusters)
    num_total_states = len(states)

hmmlearn's GaussianHMM is then fitted per appliance:

print("Training model for submeter '{}' with {} states".format(meter, num_total_states))
learnt_model[meter] = hmm.GaussianHMM(num_total_states, "full")

# Fit
learnt_model[meter].fit(X)

The means_ learnt by GaussianHMM are sorted; the sorted index order is then used to reorder startprob, covars, transmat and so on, and a fresh GaussianHMM is created per meter with the sorted parameters assigned to it.

self.meters = []
new_learnt_models = OrderedDict()
for meter in learnt_model:
    startprob, means, covars, transmat = sort_learnt_parameters(
        learnt_model[meter].startprob_, learnt_model[meter].means_,
        learnt_model[meter].covars_, learnt_model[meter].transmat_)
    new_learnt_models[meter] = hmm.GaussianHMM(startprob.size, "full")
    new_learnt_models[meter].startprob_ = startprob
    new_learnt_models[meter].transmat_ = transmat
    new_learnt_models[meter].means_ = means
    new_learnt_models[meter].covars_ = covars
    # UGLY! But works.
    self.meters.append(meter)

The mean-sorting code:

def return_sorting_mapping(means):
    means_copy = deepcopy(means)
    means_copy = np.sort(means_copy, axis=0)

    # Finding mapping
    mapping = {}
    for i, val in enumerate(means_copy):
        mapping[i] = np.where(val == means)[0][0]
    return mapping

Example: with a = [1, 5, 3, 4, 2], return_sorting_mapping returns, for each ascending rank, the index of that value in the original array.
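Running that example through the function (re-implemented inline so it runs standalone):

```python
import numpy as np
from copy import deepcopy

def return_sorting_mapping(means):
    # Same logic as above: rank -> index of that value in the original array
    means_copy = np.sort(deepcopy(means), axis=0)
    mapping = {}
    for i, val in enumerate(means_copy):
        mapping[i] = np.where(val == means)[0][0]
    return mapping

a = np.array([1, 5, 3, 4, 2])
mapping = return_sorting_mapping(a)
print(mapping)
# → {0: 0, 1: 4, 2: 2, 3: 3, 4: 1}
# e.g. rank 1 is the value 2, which lives at index 4 of the original array
```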

The remaining parameters are all reordered according to the sorted means:

def sort_startprob(mapping, startprob):
    """Sort the startprob according to power means; as returned by mapping"""
    num_elements = len(startprob)
    new_startprob = np.zeros(num_elements)
    for i in range(len(startprob)):
        new_startprob[i] = startprob[mapping[i]]
    return new_startprob


def sort_covars(mapping, covars):
    new_covars = np.zeros_like(covars)
    for i in range(len(covars)):
        new_covars[i] = covars[mapping[i]]
    return new_covars


def sort_transition_matrix(mapping, A):
    """Sorts the transition matrix according to increasing order of
    power means; as returned by mapping

    Parameters
    ----------
    mapping :
    A : numpy.array of shape (k, k)
        transition matrix
    """
    num_elements = len(A)
    A_new = np.zeros((num_elements, num_elements))
    for i in range(num_elements):
        for j in range(num_elements):
            A_new[i, j] = A[mapping[i], mapping[j]]
    return A_new

Then the per-appliance models are merged into one combined GaussianHMM:

def create_combined_hmm(model):
    list_pi = [model[appliance].startprob_ for appliance in model]
    list_A = [model[appliance].transmat_ for appliance in model]
    list_means = [model[appliance].means_.flatten().tolist()
                  for appliance in model]

    pi_combined = compute_pi_fhmm(list_pi)
    A_combined = compute_A_fhmm(list_A)
    [mean_combined, cov_combined] = compute_means_fhmm(list_means)

    combined_model = hmm.GaussianHMM(n_components=len(pi_combined),
                                     covariance_type='full')
    combined_model.startprob_ = pi_combined
    combined_model.transmat_ = A_combined
    combined_model.covars_ = cov_combined
    combined_model.means_ = mean_combined
    return combined_model

The means, transmat and startprob of the individual models are combined as follows:

def compute_A_fhmm(list_A):
    """
    Parameters
    ----------
    list_A : list of transition matrices of the individual learnt HMMs

    Returns
    -------
    result : combined transition matrix for the FHMM
    """
    result = list_A[0]
    for i in range(len(list_A) - 1):
        result = np.kron(result, list_A[i + 1])
    return result


def compute_means_fhmm(list_means):
    """
    Returns
    -------
    [mu, cov]
    """
    states_combination = list(itertools.product(*list_means))
    num_combinations = len(states_combination)
    means_stacked = np.array([sum(x) for x in states_combination])
    means = np.reshape(means_stacked, (num_combinations, 1))
    cov = np.tile(5 * np.identity(1), (num_combinations, 1, 1))
    return [means, cov]


def compute_pi_fhmm(list_pi):
    """
    Parameters
    ----------
    list_pi : list of initial state distributions (pi) of the individual
        learnt HMMs

    Returns
    -------
    result : combined pi for the FHMM
    """
    result = list_pi[0]
    for i in range(len(list_pi) - 1):
        result = np.kron(result, list_pi[i + 1])
    return result
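The Kronecker product is what glues the independent chains together. For two hypothetical 2-state appliances, the combined start distribution is every pairwise product of the individual probabilities, and it is still a valid distribution:

```python
import numpy as np

pi_fridge = np.array([0.6, 0.4])  # hypothetical 2-state appliance
pi_kettle = np.array([0.7, 0.3])  # hypothetical 2-state appliance

pi_combined = np.kron(pi_fridge, pi_kettle)
print(pi_combined)  # → [0.42 0.18 0.28 0.12]
# Entry 0 is P(fridge state 0) * P(kettle state 0), and so on;
# the four entries still sum to 1.
```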

This yields the combined FHMM model.

(2) disaggregate_chunk

First the aggregate (mains) power data is fetched; the combined GaussianHMM's predict function infers the most likely combined state sequence, and decode_hmm then decodes that sequence back into per-appliance states. At first read it is a flurry of operations that is hard to follow.

def disaggregate_chunk(self, test_mains):
    """Disaggregate the test data according to the model learnt previously

    Performs 1D FHMM disaggregation.

    For now assuming there is no missing data at this stage.
    """
    # See v0.1 code for ideas of how to handle missing data in this
    # code if needs be.

    # Array of learnt states
    learnt_states_array = []
    test_mains = test_mains.dropna()
    length = len(test_mains.index)
    temp = test_mains.values.reshape(length, 1)
    learnt_states_array.append(self.model.predict(temp))

    # Model
    means = OrderedDict()
    for elec_meter, model in self.individual.items():
        means[elec_meter] = (
            model.means_.round().astype(int).flatten().tolist())
        means[elec_meter].sort()

    decoded_power_array = []
    decoded_states_array = []
    for learnt_states in learnt_states_array:
        [decoded_states, decoded_power] = decode_hmm(
            len(learnt_states), means, means.keys(), learnt_states)
        decoded_states_array.append(decoded_states)
        decoded_power_array.append(decoded_power)

    prediction = pd.DataFrame(decoded_power_array[0], index=test_mains.index)
    return prediction

The decoding function:

def decode_hmm(length_sequence, centroids, appliance_list, states):
    """Decodes the HMM state sequence"""
    hmm_states = {}
    hmm_power = {}
    total_num_combinations = 1

    for appliance in appliance_list:
        total_num_combinations *= len(centroids[appliance])

    for appliance in appliance_list:
        hmm_states[appliance] = np.zeros(length_sequence, dtype=int)
        hmm_power[appliance] = np.zeros(length_sequence)

    for i in range(length_sequence):
        factor = total_num_combinations
        for appliance in appliance_list:
            # Integer division (the original `/` breaks under Python 3)
            factor = factor // len(centroids[appliance])
            temp = int(states[i]) // factor
            hmm_states[appliance][i] = temp % len(centroids[appliance])
            hmm_power[appliance][i] = centroids[appliance][hmm_states[appliance][i]]

    return [hmm_states, hmm_power]

Each appliance has 3 state means, so there are 243 combinations in total; the integer division and modulo operations then split each combined state back into per-appliance states, completing the disaggregation.
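The decoding is a mixed-radix conversion; a standalone sketch with a combined state index of 200 (matching the 3^5 = 243 setup above):

```python
# 5 appliances, 3 states each: combined state index lies in [0, 243)
num_states = [3, 3, 3, 3, 3]
total = 1
for n in num_states:
    total *= n  # 243

combined_state = 200
decoded = []
factor = total
for n in num_states:
    factor //= n                           # place value of this "digit"
    decoded.append((combined_state // factor) % n)

print(decoded)  # → [2, 1, 1, 0, 2]
# Sanity check: 2*81 + 1*27 + 1*9 + 0*3 + 2*1 == 200
```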

This rough walkthrough can only go so far; the remaining details fall into place once you study the underlying HMM theory.

All code is taken from the source files of the nilmtk package.
