
【TensorFlow】 基于视频时序LSTM的行为动作识别

Published: 2025/3/11 · Programming Q&A · 豆豆

Introduction
This article uses an LSTM to recognise human activities from sensor time series. Dataset: https://archive.ics.uci.edu/ml/machine-learning-databases/00240/

The dataset covers six activity classes:

walking;
standing;
lying down;
sitting;
walking upstairs;
walking downstairs.

All six activity signals were collected with body-worn sensors. The raw signal files live under:

.\data\UCI HAR Dataset\train\Inertial Signals
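Each file under Inertial Signals holds one sensor channel as N rows of 128 space-separated values. The load_X function in the script below stacks the nine channel arrays and transposes them into shape (N, 128, 9). A minimal NumPy sketch of that stacking, using small synthetic arrays in place of the real files:

```python
import numpy as np

# Synthetic stand-ins for the 9 signal files: each is (N samples, 128 steps).
n_samples, n_steps, n_signals = 4, 128, 9
signals = [np.random.rand(n_samples, n_steps).astype(np.float32)
           for _ in range(n_signals)]

# Stacking gives (9, N, 128); transposing with axes (1, 2, 0) yields
# (N, 128, 9), i.e. one 128-step window of 9 sensor readings per sample.
X = np.transpose(np.array(signals), (1, 2, 0))
print(X.shape)  # (4, 128, 9)
```

With the real dataset, N is 7352 for the training split and 2947 for the test split, matching the shapes printed later in the script.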


Implementation
This experiment implements the 6-class classification task.
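The labels in y_train.txt / y_test.txt are the integers 1 to 6; the script below shifts them to 0 to 5 in load_y and then one-hot encodes them by indexing an identity matrix. A small self-contained sketch of that encoding (the one_hot helper here mirrors the one in the full script):

```python
import numpy as np

# Raw labels as read from y_train.txt / y_test.txt: values in 1..6.
raw = np.array([[1], [3], [6]], dtype=np.int32)
y = raw - 1  # load_y subtracts 1, giving 0..5

def one_hot(y_):
    # Index rows of a (n_classes, n_classes) identity matrix by label.
    y_ = y_.reshape(len(y_))
    n_values = int(np.max(y_)) + 1
    return np.eye(n_values)[np.array(y_, dtype=np.int32)]

encoded = one_hot(y)
print(encoded.shape)  # (3, 6)
```

Each row of the result has a single 1 in the column of its class, which is the format softmax_cross_entropy_with_logits expects for its labels argument.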

pip install -i https://pypi.douban.com/simple/ --trusted-host pypi.douban.com tensorflow==1.13.1


import tensorflow as tf
import numpy as np


# Model quality is largely determined by the data: the data sets the upper bound,
# and the model can only approach it. The data here comes from body-worn recorders.
def load_X(X_signals_paths):
    X_signals = []
    for signal_type_path in X_signals_paths:
        file = open(signal_type_path, 'r')
        X_signals.append(
            [np.array(serie, dtype=np.float32) for serie in
             [row.replace('  ', ' ').strip().split(' ') for row in file]]
        )
        file.close()
    return np.transpose(np.array(X_signals), (1, 2, 0))


def load_y(y_path):
    file = open(y_path, 'r')
    y_ = np.array(
        [elem for elem in
         [row.replace('  ', ' ').strip().split(' ') for row in file]],
        dtype=np.int32
    )
    file.close()
    return y_ - 1  # shift labels from 1..6 to 0..5


class Config(object):
    def __init__(self, X_train, X_test):
        self.train_count = len(X_train)        # number of training records
        self.test_data_count = len(X_test)
        self.n_steps = len(X_train[0])         # 128 time steps per window
        self.learning_rate = 0.0025
        self.lambda_loss_amount = 0.0015       # L2 regularization strength
        self.training_epochs = 300
        self.batch_size = 1500
        self.n_inputs = len(X_train[0][0])     # 9 sensor readings per step
        self.n_hidden = 32                     # hidden units
        self.n_classes = 6                     # 6 output classes
        self.W = {
            'hidden': tf.Variable(tf.random_normal([self.n_inputs, self.n_hidden])),   # input -> hidden
            'output': tf.Variable(tf.random_normal([self.n_hidden, self.n_classes]))   # hidden -> output
        }
        self.biases = {
            'hidden': tf.Variable(tf.random_normal([self.n_hidden], mean=1.0)),
            'output': tf.Variable(tf.random_normal([self.n_classes]))
        }


# Build the LSTM network
def LSTM_Network(_X, config):
    # Rearrange the input to match what static_rnn expects
    _X = tf.transpose(_X, [1, 0, 2])  # swap the batch and time dimensions
    _X = tf.reshape(_X, [-1, config.n_inputs])
    _X = tf.nn.relu(tf.matmul(_X, config.W['hidden']) + config.biases['hidden'])  # project 9 inputs to 32 units
    _X = tf.split(_X, config.n_steps, 0)  # one tensor per RNN time step

    # Stack two LSTM layers
    lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0, state_is_tuple=True)
    lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
    # outputs: per-step results; states: intermediate cell state
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)
    print(np.array(outputs).shape)  # 128 outputs, one per step

    lstm_last_output = outputs[-1]  # keep only the last step's output
    return tf.matmul(lstm_last_output, config.W['output']) + config.biases['output']  # classification logits


def one_hot(y_):
    y_ = y_.reshape(len(y_))
    n_values = int(np.max(y_)) + 1
    return np.eye(n_values)[np.array(y_, dtype=np.int32)]


if __name__ == '__main__':
    # Nine input signals, i.e. the filename prefixes of the 9 data files
    INPUT_SIGNAL_TYPES = [
        'body_acc_x_',
        'body_acc_y_',
        'body_acc_z_',
        'body_gyro_x_',
        'body_gyro_y_',
        'body_gyro_z_',
        'total_acc_x_',
        'total_acc_y_',
        'total_acc_z_'
    ]
    # Six activity labels: walking, walking upstairs, walking downstairs, sitting, standing, lying down
    LABELS = [
        'WALKING',
        'WALKING_UPSTAIRS',
        'WALKING_DOWNSTAIRS',
        'SITTING',
        'STANDING',
        'LAYING'
    ]

    # Data paths
    DATA_PATH = 'data/'
    DATASET_PATH = DATA_PATH + 'UCI HAR Dataset/'
    print('\n' + 'Dataset is now located at:' + DATASET_PATH)
    TRAIN = 'train/'
    TEST = 'test/'
    X_train_signals_paths = [DATASET_PATH + TRAIN + 'Inertial Signals/' + signal + 'train.txt'
                             for signal in INPUT_SIGNAL_TYPES]
    X_test_signals_paths = [DATASET_PATH + TEST + 'Inertial Signals/' + signal + 'test.txt'
                            for signal in INPUT_SIGNAL_TYPES]
    X_train = load_X(X_train_signals_paths)
    X_test = load_X(X_test_signals_paths)
    print('X_train:', X_train.shape)  # 7352 samples, each a 128-step window with 9 readings per step
    print('X_test:', X_test.shape)

    y_train_path = DATASET_PATH + TRAIN + 'y_train.txt'
    y_test_path = DATASET_PATH + TEST + 'y_test.txt'
    y_train = one_hot(load_y(y_train_path))
    y_test = one_hot(load_y(y_test_path))
    print('y_train:', y_train.shape)  # 7352 samples, 6 classes
    print('y_test:', y_test.shape)

    config = Config(X_train, X_test)
    print("Some useful info to get an insight on dataset's shape and normalisation:")
    print("features shape, labels shape, each features mean, each features standard deviation")
    print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
    print('the dataset is therefore properly normalised, as expected.')

    X = tf.placeholder(tf.float32, [None, config.n_steps, config.n_inputs])
    Y = tf.placeholder(tf.float32, [None, config.n_classes])

    pred_Y = LSTM_Network(X, config)  # final predictions

    # L2 penalty; tf.trainable_variables() lists only the trainable variables
    l2 = config.lambda_loss_amount * \
        sum(tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables())
    # Loss
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=pred_Y)) + l2
    optimizer = tf.train.AdamOptimizer(learning_rate=config.learning_rate).minimize(cost)

    correct_pred = tf.equal(tf.argmax(pred_Y, 1), tf.argmax(Y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, dtype=tf.float32))

    # tf.InteractiveSession(): lets you build the session first and define operations afterwards
    # tf.Session(): all operations must be defined before the session is built
    # tf.ConfigProto(): controls reporting of which device (CPU/GPU) each operation and tensor is assigned to
    # log_device_placement=False: do not print device placement to the terminal
    sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=False))
    init = tf.global_variables_initializer()
    sess.run(init)

    best_accuracy = 0.0
    for i in range(config.training_epochs):
        # zip() pairs elements of its iterables into tuples, here (start, end) indices for each mini-batch
        for start, end in zip(range(0, config.train_count, config.batch_size),
                              range(config.batch_size, config.train_count + 1, config.batch_size)):
            sess.run(optimizer, feed_dict={X: X_train[start:end],
                                           Y: y_train[start:end]})

        # the training progress could also be visualised here
        pred_out, accuracy_out, loss_out = sess.run([pred_Y, accuracy, cost],
                                                    feed_dict={X: X_test, Y: y_test})
        print('traing iter: {},'.format(i) + 'test accuracy: {},'.format(accuracy_out) + 'loss:{}'.format(loss_out))
        best_accuracy = max(best_accuracy, accuracy_out)

    print('')
    print('final test accuracy: {}'.format(accuracy_out))
    print("best epoch's test accuracy: {}".format(best_accuracy))
    print('')
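The tensor reshaping at the top of LSTM_Network is easy to lose track of; it can be traced with plain NumPy (a sketch with a small synthetic batch; static_rnn expects a Python list of n_steps tensors, each of shape (batch, features)):

```python
import numpy as np

batch, n_steps, n_inputs = 2, 128, 9
x = np.zeros((batch, n_steps, n_inputs), dtype=np.float32)

# tf.transpose(_X, [1, 0, 2]): time-major layout, (n_steps, batch, n_inputs)
x = np.transpose(x, (1, 0, 2))
# tf.reshape(_X, [-1, n_inputs]): flatten to (n_steps * batch, n_inputs)
x = x.reshape(-1, n_inputs)
# (in the real network, the ReLU projection maps n_inputs=9 -> n_hidden=32 here)
# tf.split(_X, n_steps, 0): a list of n_steps arrays, each (batch, n_inputs)
steps = np.split(x, n_steps, axis=0)
print(len(steps), steps[0].shape)  # 128 (2, 9)
```

The list of 128 per-step arrays is what static_rnn unrolls over, which is also why the script's debug print of np.array(outputs).shape reports (128,).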



Output:

Dataset is now located at:data/UCI HAR Dataset/
X_train: (7352, 128, 9)
X_test: (2947, 128, 9)
y_train: (7352, 6)
y_test: (2947, 6)
Some useful info to get an insight on dataset's shape and normalisation:
features shape, labels shape, each features mean, each features standard deviation
(2947, 128, 9) (2947, 6) 0.09913992 0.39567086
the dataset is therefore properly normalised, as expected.
WARNING:tensorflow:From D:/WorkSpace/ai/csdn/lab-lstm-activity-recognition/lstm.py:58: BasicLSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is deprecated, please use tf.nn.rnn_cell.LSTMCell, which supports all the feature this cell currently has. Please replace the existing code with tf.nn.rnn_cell.LSTMCell(name='basic_lstm_cell').
(128,)
WARNING:tensorflow:From D:/WorkSpace/ai/csdn/lab-lstm-activity-recognition/lstm.py:141: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default.
See `tf.nn.softmax_cross_entropy_with_logits_v2`.
2019-12-16 19:45:10.908801: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
traing iter: 0,test accuracy: 0.4801492989063263,loss:1.9456521272659302
final test accuracy: 0.4801492989063263
best epoch's test accuracy: 0.4801492989063263
traing iter: 1,test accuracy: 0.5334238409996033,loss:1.6313532590866089
final test accuracy: 0.5334238409996033
best epoch's test accuracy: 0.5334238409996033
traing iter: 2,test accuracy: 0.6128265857696533,loss:1.4844205379486084
final test accuracy: 0.6128265857696533
best epoch's test accuracy: 0.6128265857696533


Summary

That is the full walkthrough of 【TensorFlow】 基于视频时序LSTM的行为动作识别; hopefully it helps with the problem you are working on.
