

A Detailed Walkthrough of the GCN Code: Two-stream adaptive graph convolutional network for Skeleton-Based Action Recognition (Part 1)

Published: 2025/4/16 · by 豆豆

Code: https://github.com/lshiwjx/2s-AGCN

This figure illustrates how the human-body keypoints are defined and how they are connected.

This file computes bone lengths from the keypoint definition in NTU RGB+D.
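As a rough sketch of that computation (the `pairs` list below is a hypothetical subset, not the repo's full NTU pair list): a bone is the vector from a parent joint to its child, and its length is the norm of that vector per frame and person.

```python
import numpy as np

# Hypothetical subset of (child, parent) joint pairs from the NTU RGB+D
# skeleton definition; the repo's list covers all 25 joints.
pairs = [(1, 0), (2, 20), (3, 2)]

def bone_lengths(joints, pairs):
    """joints: (C, T, V, M) array of 3D joint positions for one sample.
    Returns a (len(pairs), T, M) array of per-frame bone lengths."""
    lengths = []
    for child, parent in pairs:
        vec = joints[:, :, child, :] - joints[:, :, parent, :]  # bone vector, (C, T, M)
        lengths.append(np.linalg.norm(vec, axis=0))             # length, (T, M)
    return np.stack(lengths)

# Toy data: joint 1 sits at (1, 1, 1) relative to joint 0 in every frame.
joints = np.zeros((3, 2, 25, 1))
joints[:, :, 1, 0] = 1.0
print(bone_lengths(joints, pairs)[0, 0, 0])  # sqrt(3) ≈ 1.732
```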

The resulting data tensor has shape (N, C, T, V, M):
N = number of samples; C = channels (fixed at 3, the xyz coordinates); T = frames; V = number of joints (fixed at 25); M = maximum number of people (fixed at 2).
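A quick way to sanity-check this layout with a dummy array (the sizes here are illustrative):

```python
import numpy as np

# Dummy batch in the (N, C, T, V, M) layout described above:
# 4 samples, 3 coordinate channels, 300 frames, 25 joints, up to 2 people.
data = np.zeros((4, 3, 300, 25, 2))
N, C, T, V, M = data.shape

# preprocess.py works in (N, M, T, V, C) order, via a transpose:
s = np.transpose(data, [0, 4, 2, 3, 1])
print(s.shape)  # (4, 2, 300, 25, 3)
```

Note that `np.transpose` returns a view, so in-place writes to `s` also modify `data`; the preprocessing code below relies on this.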

The file above merges the joint and bone information.
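The merge script itself is not reproduced in this post; assuming it concatenates the joint and bone tensors along the channel axis (so C goes from 3 to 6), a minimal sketch would be:

```python
import numpy as np

# Stand-in joint and bone tensors, both in the (N, C, T, V, M) layout.
data_jpt = np.zeros((4, 3, 300, 25, 2))
data_bone = np.ones((4, 3, 300, 25, 2))

# Concatenating along the channel axis yields C = 6
# (xyz of the joint plus xyz of the bone vector).
data_merged = np.concatenate((data_jpt, data_bone), axis=1)
print(data_merged.shape)  # (4, 6, 300, 25, 2)
```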
Below is an annotated walkthrough of the 2s-AGCN/data_gen/preprocess.py file. The comments run fairly long and would not fit on one screen, so I split the code into smaller pieces for easier scrolling; as a result, the line numbers no longer match the original file. Apologies for the inconvenience.

import sys
sys.path.extend(['../'])
import numpy as np
from data_gen.rotation import *
from tqdm import tqdm


def pre_normalization(data, zaxis=[0, 1], xaxis=[8, 4]):
    N, C, T, V, M = data.shape
    s = np.transpose(data, [0, 4, 2, 3, 1])  # N, C, T, V, M  ->  N, M, T, V, C

    print('pad the null frames with the previous frames')
    for i_s, skeleton in enumerate(tqdm(s)):  # pick one sample
        if skeleton.sum() == 0:
            print(i_s, ' has no skeleton')
        for i_p, person in enumerate(skeleton):  # pick one person in the sample
            if person.sum() == 0:  # this person is entirely empty
                continue
            if person[0].sum() == 0:  # frame 0 of this person is empty
                index = (person.sum(-1).sum(-1) != 0)  # boolean mask over frames: True where the frame has data
                tmp = person[index].copy()  # keep only the non-empty frames
                person *= 0  # clear this person
                person[:len(tmp)] = tmp  # move the valid frames to the front
            for i_f, frame in enumerate(person):  # pick one frame
                if frame.sum() == 0:
                    if person[i_f:].sum() == 0:  # everything from this frame onward is empty
                        rest = len(person) - i_f
                        num = int(np.ceil(rest / i_f))  # tile the valid frames cyclically to fill the rest
                        pad = np.concatenate([person[0:i_f] for _ in range(num)], 0)[:rest]
                        s[i_s, i_p, i_f:] = pad
                        break

    print('sub the center joint #1 (spine joint in ntu and neck joint in kinetics)')
    for i_s, skeleton in enumerate(tqdm(s)):
        if skeleton.sum() == 0:
            continue
        # joint 1 of the first person in the sample is taken as the body center
        main_body_center = skeleton[0][:, 1:2, :].copy()
        for i_p, person in enumerate(skeleton):
            if person.sum() == 0:
                continue
            mask = (person.sum(-1) != 0).reshape(T, V, 1)
            s[i_s, i_p] = (s[i_s, i_p] - main_body_center) * mask  # subtract the center; empty joints stay at zero

    # align the bone between hip (jpt 0) and spine (jpt 1) of the first person with the z axis
    print('parallel the bone between hip(jpt 0) and spine(jpt 1) of the first person to the z axis')
    for i_s, skeleton in enumerate(tqdm(s)):
        if skeleton.sum() == 0:
            continue
        joint_bottom = skeleton[0, 0, zaxis[0]]
        joint_top = skeleton[0, 0, zaxis[1]]
        axis = np.cross(joint_top - joint_bottom, [0, 0, 1])
        angle = angle_between(joint_top - joint_bottom, [0, 0, 1])
        matrix_z = rotation_matrix(axis, angle)
        for i_p, person in enumerate(skeleton):
            if person.sum() == 0:
                continue
            for i_f, frame in enumerate(person):
                if frame.sum() == 0:
                    continue
                for i_j, joint in enumerate(frame):
                    s[i_s, i_p, i_f, i_j] = np.dot(matrix_z, joint)

    # align the bone between right shoulder (jpt 8) and left shoulder (jpt 4) of the first person with the x axis
    print('parallel the bone between right shoulder(jpt 8) and left shoulder(jpt 4) of the first person to the x axis')
    for i_s, skeleton in enumerate(tqdm(s)):
        if skeleton.sum() == 0:
            continue
        joint_rshoulder = skeleton[0, 0, xaxis[0]]
        joint_lshoulder = skeleton[0, 0, xaxis[1]]
        axis = np.cross(joint_rshoulder - joint_lshoulder, [1, 0, 0])
        angle = angle_between(joint_rshoulder - joint_lshoulder, [1, 0, 0])
        matrix_x = rotation_matrix(axis, angle)
        for i_p, person in enumerate(skeleton):
            if person.sum() == 0:
                continue
            for i_f, frame in enumerate(person):
                if frame.sum() == 0:
                    continue
                for i_j, joint in enumerate(frame):
                    s[i_s, i_p, i_f, i_j] = np.dot(matrix_x, joint)

    data = np.transpose(s, [0, 4, 2, 3, 1])
    return data


if __name__ == '__main__':
    data = np.load('../data/ntu/xview/val_data.npy')
    data = pre_normalization(data)  # normalization also happens in place via the transposed view
    np.save('../data/ntu/xview/data_val_pre.npy', data)
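The code above imports `angle_between` and `rotation_matrix` from `data_gen/rotation.py`, which is not shown in this post. Below is a minimal stand-in with the same interface, based on the Euler-Rodrigues formula; details may differ from the repo's version, and the degenerate-axis guard is an addition here.

```python
import numpy as np

def unit_vector(v):
    # v scaled to unit length
    return v / np.linalg.norm(v)

def angle_between(v1, v2):
    # angle in radians between vectors v1 and v2
    v1_u = unit_vector(np.asarray(v1, dtype=float))
    v2_u = unit_vector(np.asarray(v2, dtype=float))
    return np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0))

def rotation_matrix(axis, theta):
    # rotation by theta radians about `axis` (Euler-Rodrigues formula);
    # guard added here: return identity when the axis degenerates
    axis = np.asarray(axis, dtype=float)
    if np.linalg.norm(axis) < 1e-8:
        return np.eye(3)
    axis = axis / np.linalg.norm(axis)
    a = np.cos(theta / 2.0)
    b, c, d = -axis * np.sin(theta / 2.0)
    aa, bb, cc, dd = a * a, b * b, c * c, d * d
    bc, ad, ac, ab, bd, cd = b * c, a * d, a * c, a * b, b * d, c * d
    return np.array([[aa + bb - cc - dd, 2 * (bc + ad), 2 * (bd - ac)],
                     [2 * (bc - ad), aa + cc - bb - dd, 2 * (cd + ab)],
                     [2 * (bd + ac), 2 * (cd - ab), aa + dd - bb - cc]])

# Align a bone direction v with the z axis, as pre_normalization does:
v = np.array([1.0, 0.0, 0.0])
z = np.array([0.0, 0.0, 1.0])
R = rotation_matrix(np.cross(v, z), angle_between(v, z))
print(np.round(R @ v, 6))  # close to [0, 0, 1]
```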
