

Deep Learning: Implementing the Model from "3D Fully Convolutional Network for Vehicle Detection in Point Cloud"

Published: 2024/3/7

1. Reference

3D Fully Convolutional Network for Vehicle Detection in Point Cloud

2. Model Implementation

```python
''' Baidu Inc.
    Ref: 3D Fully Convolutional Network for Vehicle Detection in Point Cloud
    Author: HSW
    Date: 2018-05-02
'''

import sys
import numpy as np
import tensorflow as tf
from prepare_data2 import *
from baidu_cnn_3d import *

KITTI_TRAIN_DATA_CNT = 7481
KITTI_TEST_DATA_CNT = 7518


# create the 3D-CNN model graph
def create_graph(sess, modelType=0, voxel_shape=(400, 400, 20),
                 activation=tf.nn.relu, is_train=True):
    '''Inputs:
           sess: TensorFlow Session object
           voxel_shape: voxel grid shape for the network's first layer
           activation: activation function
           is_train: build the graph in training mode
       Outputs:
           voxel placeholder, model, phase_train placeholder
    '''
    voxel = tf.placeholder(tf.float32, [None, voxel_shape[0], voxel_shape[1], voxel_shape[2], 1])
    phase_train = tf.placeholder(tf.bool, name="phase_train") if is_train else None
    with tf.variable_scope("3D_CNN_Model") as scope:
        model = Full_CNN_3D_Model()
        model.cnn3d_graph(voxel, modelType=modelType, activation=activation, phase_train=is_train)
    if is_train:
        # the scope name must match the variable_scope above
        initialized_var = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="3D_CNN_Model")
        sess.run(tf.variables_initializer(initialized_var))
    return voxel, model, phase_train


# batch generator over the KITTI split
def read_batch_data(batch_size, data_set_dir, objectType="Car", split="training",
                    resolution=(0.2, 0.2, 0.2), scale=0.25,
                    limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5)):
    '''Yields (voxel, g_obj, g_cord) batches for training, or voxel batches for testing.
       scale = outputSize / inputSize.
    '''
    kitti_3DVoxel = kitti_3DVoxel_interface(data_set_dir, objectType=objectType, split=split,
                                            scale=scale, resolution=resolution,
                                            limitX=limitX, limitY=limitY, limitZ=limitZ)
    TRAIN_PROCESSED_IDX = 0
    TEST_PROCESSED_IDX = 0
    if split == "training":
        while TRAIN_PROCESSED_IDX < KITTI_TRAIN_DATA_CNT:
            batch_voxel = []
            batch_g_obj = []
            batch_g_cord = []
            idx = 0
            while idx < batch_size and TRAIN_PROCESSED_IDX < KITTI_TRAIN_DATA_CNT:
                voxel, g_obj, g_cord = kitti_3DVoxel.read_kitti_data(TRAIN_PROCESSED_IDX)
                TRAIN_PROCESSED_IDX += 1
                if voxel is None:
                    continue
                idx += 1
                batch_voxel.append(voxel)
                batch_g_obj.append(g_obj)
                batch_g_cord.append(g_cord)
            yield np.array(batch_voxel, dtype=np.float32)[:, :, :, :, np.newaxis], \
                  np.array(batch_g_obj, dtype=np.float32), \
                  np.array(batch_g_cord, dtype=np.float32)
    elif split == "testing":
        while TEST_PROCESSED_IDX < KITTI_TEST_DATA_CNT:
            batch_voxel = []
            idx = 0
            while idx < batch_size and TEST_PROCESSED_IDX < KITTI_TEST_DATA_CNT:
                voxel = kitti_3DVoxel.read_kitti_data(TEST_PROCESSED_IDX)
                TEST_PROCESSED_IDX += 1
                if voxel is None:
                    continue
                idx += 1
                batch_voxel.append(voxel)
            yield np.array(batch_voxel, dtype=np.float32)[:, :, :, :, np.newaxis]


# train the 3D-CNN model
def train(batch_num, data_set_dir, modelType=0, objectType="Car",
          resolution=(0.2, 0.2, 0.2), scale=0.25, lr=0.01,
          limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5), epoch=101):
    batch_size = batch_num
    training_epochs = epoch
    # note: each axis must use its own resolution component
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0]))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1]))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2]))
    voxel_shape = (sizeX, sizeY, sizeZ)
    with tf.Session() as sess:
        voxel, model, phase_train = create_graph(sess, modelType=modelType,
                                                 voxel_shape=voxel_shape,
                                                 activation=tf.nn.relu, is_train=True)
        saver = tf.train.Saver()
        total_loss, obj_loss, cord_loss, is_obj_loss, non_obj_loss, g_obj, g_cord, y_pred = \
            model.loss_Fun(lossType=0, cord_loss_weight=0.02)
        optimizer = model.create_optimizer(total_loss, optType="Adam", learnRate=lr)
        sess.run(tf.global_variables_initializer())
        for epoch in range(training_epochs):
            batchCnt = 0
            for (batch_voxel, batch_g_obj, batch_g_cord) in read_batch_data(
                    batch_size, data_set_dir, objectType=objectType, split="training",
                    resolution=resolution, scale=scale,
                    limitX=limitX, limitY=limitY, limitZ=limitZ):
                feed = {voxel: batch_voxel, g_obj: batch_g_obj,
                        g_cord: batch_g_cord, phase_train: True}
                sess.run(optimizer, feed_dict=feed)
                cord_cost = sess.run(cord_loss, feed_dict=feed)
                obj_cost = sess.run(is_obj_loss, feed_dict=feed)
                non_obj_cost = sess.run(non_obj_loss, feed_dict=feed)
                print("Epoch:", epoch + 1, ", BatchNum:", batchCnt + 1,
                      ", cord_cost = {:.9f}".format(cord_cost))
                print("Epoch:", epoch + 1, ", BatchNum:", batchCnt + 1,
                      ", obj_cost = {:.9f}".format(obj_cost))
                print("Epoch:", epoch + 1, ", BatchNum:", batchCnt + 1,
                      ", non_obj_cost = {:.9f}".format(non_obj_cost))
                batchCnt += 1
            if (epoch > 0) and (epoch % 10 == 0):
                saver.save(sess, "velodyne_kitti_train_" + str(epoch) + ".ckpt")
        print("Training finished!")


# test the 3D-CNN model
def test(batch_num, data_set_dir, modelType=0, objectType="Car",
         resolution=(0.2, 0.2, 0.2), scale=0.25,
         limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5)):
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0]))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1]))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2]))
    voxel_shape = (sizeX, sizeY, sizeZ)
    # read_batch_data is a generator; take one batch (already has the channel axis)
    batch_voxel_x = next(read_batch_data(batch_num, data_set_dir, objectType=objectType,
                                         split="testing", resolution=resolution, scale=scale,
                                         limitX=limitX, limitY=limitY, limitZ=limitZ))
    with tf.Session() as sess:
        voxel, model, phase_train = create_graph(sess, modelType=modelType,
                                                 voxel_shape=voxel_shape,
                                                 activation=tf.nn.relu, is_train=False)
        saver = tf.train.Saver()
        saver.restore(sess, "./velodyne_kitti_train_40.ckpt")
        objectness = sess.run(model.objectness, feed_dict={voxel: batch_voxel_x})[0, :, :, :, 0]
        cordinate = sess.run(model.cordinate, feed_dict={voxel: batch_voxel_x})[0]
        y_pred = sess.run(model.y, feed_dict={voxel: batch_voxel_x})[0, :, :, :, 0]
        idx = np.where(y_pred >= 0.995)
        spheres = np.vstack((idx[0], np.vstack((idx[1], idx[2])))).transpose()
        centers = sphere_to_center(spheres, scale=scale, resolution=resolution,
                                   limitX=limitX, limitY=limitY, limitZ=limitZ)
        corners = cordinate[idx].reshape(-1, 8, 3) + centers[:, np.newaxis]
        print(centers)
        print(corners)


if __name__ == "__main__":
    batch_num = 3
    data_set_dir = "/home/hsw/桌面/PCL_API_Doc/frustum-pointnets-master/dataset"
    modelType = 1
    objectType = "Car"
    resolution = (0.2, 0.2, 0.2)
    scale = 0.25
    lr = 0.001
    limitX = (0, 80)
    limitY = (-40, 40)
    limitZ = (-2.5, 1.5)
    epoch = 101
    train(batch_num, data_set_dir=data_set_dir, modelType=modelType, objectType=objectType,
          resolution=resolution, scale=scale, lr=lr,
          limitX=limitX, limitY=limitY, limitZ=limitZ, epoch=epoch)
```

2.1 Network Model
```python
''' Baidu Inc.
    Ref: 3D Fully Convolutional Network for Vehicle Detection in Point Cloud
    Author: HSW
    Date: 2018-05-02
'''

import numpy as np
import tensorflow as tf


class Full_CNN_3D_Model(object):
    '''Fully convolutional 3D CNN for vehicle detection.'''

    def __init__(self):
        pass

    def cnn3d_graph(self, voxel, modelType=0, activation=tf.nn.relu, phase_train=True):
        if modelType == 0:
            # Modified 3D-CNN. Not recommended: the 1/8 downsampling is too
            # aggressive and causes large localization errors at prediction time.
            self.layer1 = self.conv3d_layer(voxel, 1, 16, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer1", activation=activation, phase_train=phase_train)
            self.layer2 = self.conv3d_layer(self.layer1, 16, 32, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer2", activation=activation, phase_train=phase_train)
            self.layer3 = self.conv3d_layer(self.layer2, 32, 64, 3, 3, 3, [1, 2, 2, 2, 1],
                                            name="layer3", activation=activation, phase_train=phase_train)
            self.layer4 = self.conv3d_layer(self.layer3, 64, 64, 3, 3, 3, [1, 1, 1, 1, 1],
                                            name="layer4", activation=activation, phase_train=phase_train)
            self.objectness = self.conv3D_to_output(self.layer4, 64, 2, 3, 3, 3, [1, 1, 1, 1, 1],
                                                    name="objectness", activation=None)
            self.cordinate = self.conv3D_to_output(self.layer4, 64, 24, 3, 3, 3, [1, 1, 1, 1, 1],
                                                   name="cordinate", activation=None)
            self.y = tf.nn.softmax(self.objectness, dim=-1)
        elif modelType == 1:
            # 3D-CNN from the paper: 1/4 downsampling, i.e. outputSize / inputSize = 0.25
            self.layer1 = self.conv3d_layer(voxel, 1, 10, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer1", activation=activation, phase_train=phase_train)
            self.layer2 = self.conv3d_layer(self.layer1, 10, 20, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer2", activation=activation, phase_train=phase_train)
            self.layer3 = self.conv3d_layer(self.layer2, 20, 30, 3, 3, 3, [1, 2, 2, 2, 1],
                                            name="layer3", activation=activation, phase_train=phase_train)
            # deconvolve back up to layer2's spatial resolution
            base_shape = self.layer2.get_shape().as_list()
            obj_output_shape = [tf.shape(self.layer3)[0], base_shape[1], base_shape[2], base_shape[3], 2]
            cord_output_shape = [tf.shape(self.layer3)[0], base_shape[1], base_shape[2], base_shape[3], 24]
            self.objectness = self.deconv3D_to_output(self.layer3, 30, 2, 3, 3, 3, [1, 2, 2, 2, 1],
                                                      obj_output_shape, name="objectness", activation=None)
            self.cordinate = self.deconv3D_to_output(self.layer3, 30, 24, 3, 3, 3, [1, 2, 2, 2, 1],
                                                     cord_output_shape, name="cordinate", activation=None)
            self.y = tf.nn.softmax(self.objectness, dim=-1)

    # batch normalization
    def batch_norm(self, inputs, phase_train=True, decay=0.9, eps=1e-5):
        '''Inputs:
               inputs: output of the previous layer
               phase_train: True for training, False for testing
           Outputs:
               normalized tensor for the next layer
        '''
        gamma = tf.get_variable("gamma", shape=inputs.get_shape()[-1], dtype=tf.float32,
                                initializer=tf.constant_initializer(1.0))
        beta = tf.get_variable("beta", shape=inputs.get_shape()[-1], dtype=tf.float32,
                               initializer=tf.constant_initializer(0.0))
        pop_mean = tf.get_variable("pop_mean", trainable=False, shape=inputs.get_shape()[-1],
                                   dtype=tf.float32, initializer=tf.constant_initializer(0.0))
        pop_var = tf.get_variable("pop_var", trainable=False, shape=inputs.get_shape()[-1],
                                  dtype=tf.float32, initializer=tf.constant_initializer(1.0))
        if phase_train:
            batch_mean, batch_var = tf.nn.moments(inputs, axes=[0, 1, 2, 3])
            train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
            train_var = tf.assign(pop_var, pop_var * decay + batch_var * (1 - decay))
            with tf.control_dependencies([train_mean, train_var]):
                return tf.nn.batch_normalization(inputs, batch_mean, batch_var, beta, gamma, eps)
        else:
            return tf.nn.batch_normalization(inputs, pop_mean, pop_var, beta, gamma, eps)

    # 3D convolution layer
    def conv3d_layer(self, inputs, inputs_dims, outputs_dims, height, width, length, stride,
                     activation=tf.nn.relu, padding="SAME", name="", phase_train=True):
        '''Inputs:
               inputs: previous layer's output
               inputs_dims / outputs_dims: input / output channel counts
               length, height, width: conv3d kernel size
               stride: conv3d strides along each axis
           Outputs:
               3D conv layer output
        '''
        with tf.variable_scope("conv3D" + name):
            kernel = tf.get_variable("weights",
                                     shape=[length, height, width, inputs_dims, outputs_dims],
                                     dtype=tf.float32,
                                     initializer=tf.truncated_normal_initializer(stddev=0.01))
            bias = tf.get_variable("bias", shape=[outputs_dims], dtype=tf.float32,
                                   initializer=tf.constant_initializer(0.0))
            conv = tf.nn.conv3d(inputs, kernel, stride, padding=padding)
            out = tf.nn.bias_add(conv, bias)
            if activation:
                out = activation(out, name="activation")
            out = self.batch_norm(out, phase_train)
            return out

    # 3D conv output head (no bias, no activation by default at the call sites)
    def conv3D_to_output(self, inputs, inputs_dims, outputs_dims, height, width, length, stride,
                         activation=tf.nn.relu, padding="SAME", name="", phase_train=True):
        with tf.variable_scope("conv3D" + name):
            kernel = tf.get_variable("weights",
                                     shape=[length, height, width, inputs_dims, outputs_dims],
                                     dtype=tf.float32,
                                     initializer=tf.constant_initializer(0.01))
            return tf.nn.conv3d(inputs, kernel, stride, padding=padding)

    # 3D deconvolution output head
    def deconv3D_to_output(self, inputs, inputs_dims, outputs_dims, height, width, length, stride,
                           output_shape, activation=tf.nn.relu, padding="SAME", name="", phase_train=True):
        '''output_shape: shape of the upsampled (de-conv) output.'''
        with tf.variable_scope("deconv3D" + name):
            # note the transposed-conv kernel layout: [..., out_channels, in_channels]
            kernel = tf.get_variable("weights",
                                     shape=[length, height, width, outputs_dims, inputs_dims],
                                     dtype=tf.float32,
                                     initializer=tf.constant_initializer(0.01))
            return tf.nn.conv3d_transpose(inputs, kernel, output_shape, stride, padding=padding)

    # loss definition
    def loss_Fun(self, lossType=0, cord_loss_weight=0.02):
        '''Inputs:
               lossType: selects the loss variant
               cord_loss_weight: weight of the corner regression term (default 0.02)
        '''
        if lossType == 0:
            g_obj = tf.placeholder(tf.float32, self.cordinate.get_shape().as_list()[:4])
            g_cord = tf.placeholder(tf.float32, self.cordinate.get_shape().as_list())
            non_g_obj = tf.subtract(tf.ones_like(g_obj, dtype=tf.float32), g_obj)
            eps = 0.00001
            y = self.y
            # objectness cross-entropy terms on channel 0 of the softmax
            is_obj_loss = -tf.reduce_sum(tf.multiply(g_obj, tf.log(y[:, :, :, :, 0] + eps)))
            non_obj_loss = tf.reduce_sum(tf.multiply(non_g_obj, tf.log(y[:, :, :, :, 0] + eps)))
            obj_loss = tf.add(is_obj_loss, non_obj_loss)
            # corner regression loss, applied only at object voxels
            cord_diff = tf.multiply(g_obj, tf.reduce_sum(tf.square(tf.subtract(self.cordinate, g_cord)), 4))
            cord_loss = tf.multiply(tf.reduce_sum(cord_diff), cord_loss_weight)
            return tf.add(obj_loss, cord_loss), obj_loss, cord_loss, is_obj_loss, non_obj_loss, g_obj, g_cord, y

    # create the optimizer
    def create_optimizer(self, all_loss, optType="Adam", learnRate=0.001):
        if optType == "Adam":
            opt = tf.train.AdamOptimizer(learnRate)
            optimizer = opt.minimize(all_loss)
        return optimizer
```
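The `modelType == 1` network applies three stride-2 convolutions and then one stride-2 deconvolution back to the second layer's resolution, so the output map is the input grid scaled by roughly 0.25 per axis (hence `scale = 0.25`). A quick sketch of the SAME-padding shape arithmetic, using plain Python rather than TensorFlow:

```python
import math

def downsampled_shape(shape, strides):
    """Spatial size after a chain of SAME-padded strided convs: ceil(n / s) per layer."""
    for s in strides:
        shape = tuple(math.ceil(n / s) for n in shape)
    return shape

voxel = (400, 400, 20)                          # X x Y x Z grid at 0.2 m resolution
after3 = downsampled_shape(voxel, [2, 2, 2])    # after layer1..layer3
after2 = downsampled_shape(voxel, [2, 2])       # deconv target (layer2's resolution)
print(after3)  # (50, 50, 3)
print(after2)  # (100, 100, 5)
```

This is why the deconv output shapes are taken from `self.layer2.get_shape()`: the heads predict on a grid one quarter the input size in X and Y.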

2.2 Data Preprocessing

```python
'''Prepare KITTI data for 3D object detection.
   Ref: 3D Fully Convolutional Network for Vehicle Detection in Point Cloud
   Author: Shiwen He
   Date: 28 April 2018
'''

import numpy as np
from kitti_object import kitti_object as kittiReader
import kitti_util


# filter lidar points by the camera field of view (roughly -45 to 45 degrees)
def filter_camera_fov(pc):
    '''Inputs:  pc: n x 3
       Outputs: filter_pc: m x 3, m <= n
    '''
    logic_fov = np.logical_and((pc[:, 1] < pc[:, 0] - 0.27), (-pc[:, 1] < pc[:, 0] - 0.27))
    return pc[logic_fov]


# filter lidar points by the detection range
def filter_lidar_range(pc, limitX, limitY, limitZ):
    '''Inputs:  pc: n x 3; limitX, limitY, limitZ: 1 x 2
       Outputs: filter_pc: m x 3, m <= n
    '''
    logic_x = np.logical_and(pc[:, 0] >= limitX[0], pc[:, 0] < limitX[1])
    logic_y = np.logical_and(pc[:, 1] >= limitY[0], pc[:, 1] < limitY[1])
    logic_z = np.logical_and(pc[:, 2] >= limitZ[0], pc[:, 2] < limitZ[1])
    logic_xyz = np.logical_and(logic_x, np.logical_and(logic_y, logic_z))
    return pc[:, :3][logic_xyz]


# keep only boxes whose center and all 8 corners lie inside the range
def filter_center_corners(centers, corners, boxsizes, limitX, limitY, limitZ):
    '''Inputs:  centers: n x 3; corners: n x 8 x 3; boxsizes: n x 3
       Outputs: filtered centers / corners / boxsizes, or (None, None, None)
    '''
    logic_x = np.logical_and(centers[:, 0] >= limitX[0], centers[:, 0] < limitX[1])
    logic_y = np.logical_and(centers[:, 1] >= limitY[0], centers[:, 1] < limitY[1])
    logic_z = np.logical_and(centers[:, 2] >= limitZ[0], centers[:, 2] < limitZ[1])
    logic_xyz = np.logical_and(logic_x, np.logical_and(logic_y, logic_z))
    filter_centers_1 = centers[logic_xyz, :]
    filter_corners_1 = corners[logic_xyz, :, :]
    filter_boxsizes_1 = boxsizes[logic_xyz, :]
    n = filter_centers_1.shape[0]
    filter_centers = np.zeros([n, 3])
    filter_corners = np.zeros([n, 8, 3])
    filter_boxsizes = np.zeros([n, 3])
    idx = 0
    for idx2 in range(n):
        logic_x = np.logical_and(filter_corners_1[idx2, :, 0] >= limitX[0],
                                 filter_corners_1[idx2, :, 0] < limitX[1])
        logic_y = np.logical_and(filter_corners_1[idx2, :, 1] >= limitY[0],
                                 filter_corners_1[idx2, :, 1] < limitY[1])
        logic_z = np.logical_and(filter_corners_1[idx2, :, 2] >= limitZ[0],
                                 filter_corners_1[idx2, :, 2] < limitZ[1])
        logic_xyz = np.logical_and(logic_x, np.logical_and(logic_y, logic_z))
        if logic_xyz.all():
            filter_centers[idx, :3] = filter_centers_1[idx2, :]
            filter_corners[idx, :8, :3] = filter_corners_1[idx2, :, :]
            filter_boxsizes[idx, :3] = filter_boxsizes_1[idx2, :]
            idx += 1
    if idx > 0:
        return filter_centers[:idx, :], filter_corners[:idx, :, :], filter_boxsizes[:idx, :]
    return None, None, None


def filter_label(object3Ds, objectType='Car'):
    '''Collect (center, size, rotation) for labels of the requested type.'''
    idx = 0
    data = np.zeros([50, 7]).astype(np.float32)
    for obj in object3Ds:
        if obj.type == "DontCare":
            continue
        if obj.type == objectType:
            data[idx, 0:3] = obj.t   # position
            data[idx, 3] = obj.h     # size: h, w, l
            data[idx, 4] = obj.w
            data[idx, 5] = obj.l
            data[idx, 6] = obj.ry    # rotation
            idx += 1
    if idx > 0:
        return data[:idx, :3], data[:idx, 3:6], data[:idx, 6]
    return None, None, None


def proj_to_velo(calib_data):
    '''Projection matrix from rectified camera coordinates to velodyne coordinates.'''
    rect = calib_data.R0          # "R0_rect", 3 x 3
    velo_to_cam = calib_data.V2C  # "Tr_velo_to_cam", 3 x 4
    inv_rect = np.linalg.inv(rect)
    inv_velo_to_cam = np.linalg.pinv(velo_to_cam[:, :3])
    return np.dot(inv_velo_to_cam, inv_rect)


def compute_3d_corners(centers, sizes, rotates):
    '''Inputs:  centers, sizes, rotates: one row per box
       Outputs: corners_3d: n x 8 x 3 array in lidar coordinates
    '''
    corners = []
    for place, rotate, sz in zip(centers, rotates, sizes):
        x, y, z = place
        h, w, l = sz
        if l > 10:
            continue
        corner = np.array([
            [x - l / 2., y - w / 2., z],
            [x + l / 2., y - w / 2., z],
            [x - l / 2., y + w / 2., z],
            [x - l / 2., y - w / 2., z + h],
            [x - l / 2., y + w / 2., z + h],
            [x + l / 2., y + w / 2., z],
            [x + l / 2., y - w / 2., z + h],
            [x + l / 2., y + w / 2., z + h],
        ])
        corner -= np.array([x, y, z])
        rotate_matrix = np.array([
            [np.cos(rotate), -np.sin(rotate), 0],
            [np.sin(rotate), np.cos(rotate), 0],
            [0, 0, 1],
        ])
        a = np.dot(corner, rotate_matrix.transpose())
        a += np.array([x, y, z])
        corners.append(a)
    return np.array(corners)


# lidar points -> binary occupancy voxel grid
def lidar_to_binary_voxel(pc, resolution, limitX, limitY, limitZ):
    '''Inputs:  pc: n x 3; resolution: 1 x 3; limitX, limitY, limitZ: 1 x 2
       Outputs: voxel: binary grid of the input size
    '''
    voxel_pc = np.zeros_like(pc).astype(np.int32)
    # point cloud position in the 3D grid
    voxel_pc[:, 0] = ((pc[:, 0] - limitX[0]) / resolution[0]).astype(np.int32)
    voxel_pc[:, 1] = ((pc[:, 1] - limitY[0]) / resolution[1]).astype(np.int32)
    voxel_pc[:, 2] = ((pc[:, 2] - limitZ[0]) / resolution[2]).astype(np.int32)
    voxel = np.zeros((int(round((limitX[1] - limitX[0]) / resolution[0])),
                      int(round((limitY[1] - limitY[0]) / resolution[1])),
                      int(round((limitZ[1] - limitZ[0]) / resolution[2]))))
    voxel[voxel_pc[:, 0], voxel_pc[:, 1], voxel_pc[:, 2]] = 1
    return voxel


# label center -> output-grid cell index ("sphere")
def center_to_sphere(centers, boxsize, scale, resolution, limitX, limitY, limitZ):
    '''Inputs:  centers: n x 3; boxsize: n x 3; scale = outputSize / inputSize
       Outputs: spheres: n x 3 integer indices in the output grid
    '''
    # the label center is the bottom face center; move it up to the 3D box center
    move_center = centers.copy()
    move_center[:, 2] = centers[:, 2] + boxsize[:, 0] / 2
    spheres = np.zeros_like(move_center).astype(np.int32)
    spheres[:, 0] = ((move_center[:, 0] - limitX[0]) / resolution[0] * scale).astype(np.int32)
    spheres[:, 1] = ((move_center[:, 1] - limitY[0]) / resolution[1] * scale).astype(np.int32)
    spheres[:, 2] = ((move_center[:, 2] - limitZ[0]) / resolution[2] * scale).astype(np.int32)
    return spheres


# output-grid cell index -> metric label center (inverse of center_to_sphere)
def sphere_to_center(spheres, scale, resolution, limitX, limitY, limitZ):
    centers = np.zeros_like(spheres).astype(np.float32)
    centers[:, 0] = spheres[:, 0] * resolution[0] / scale + limitX[0]
    centers[:, 1] = spheres[:, 1] * resolution[1] / scale + limitY[0]
    centers[:, 2] = spheres[:, 2] * resolution[2] / scale + limitZ[0]
    return centers


# express each box's corners relative to its (quantized) center
def corners_to_train(spheres, corners, scale, resolution, limitX, limitY, limitZ):
    centers = sphere_to_center(spheres, scale, resolution, limitX, limitY, limitZ)
    train_corners = np.zeros_like(corners).astype(np.float32)
    for index, (corner, center) in enumerate(zip(corners, centers)):
        train_corners[index] = corner - center
    return train_corners


# create center indices and corner offsets for training
def create_train_label(centers, corners, boxsize, scale, resolution, limitX, limitY, limitZ):
    train_centers = center_to_sphere(centers, boxsize, scale, resolution, limitX, limitY, limitZ)
    train_corners = corners_to_train(train_centers, corners, scale, resolution, limitX, limitY, limitZ)
    return train_centers, train_corners


def create_obj_map(train_centers, scale, resolution, limitX, limitY, limitZ):
    '''Objectness map of shape inputSize * scale; 1 at label centers.'''
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0] * scale))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1] * scale))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2] * scale))
    obj_map = np.zeros([sizeX, sizeY, sizeZ])
    obj_map[train_centers[:, 0], train_centers[:, 1], train_centers[:, 2]] = 1
    return obj_map


def create_cord_map(train_centers, train_corners, scale, resolution, limitX, limitY, limitZ):
    '''Coordinate map of shape (inputSize * scale) x 24; corner offsets at label centers.'''
    corners = train_corners.reshape(train_corners.shape[0], -1)  # n x 8 x 3 -> n x 24
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0] * scale))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1] * scale))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2] * scale))
    cord_map = np.zeros([sizeX, sizeY, sizeZ, 24])
    cord_map[train_centers[:, 0], train_centers[:, 1], train_centers[:, 2]] = corners
    return cord_map


# KITTI data interface
class kitti_3DVoxel_interface(object):
    def __init__(self, root_dir, objectType='Car', split='training', scale=0.25,
                 resolution=(0.2, 0.2, 0.2), limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5)):
        '''root_dir layout:
               training: root_dir/training/{velodyne, calib, label_2}
               testing:  root_dir/testing/{velodyne, calib}
        '''
        self.root_dir = root_dir
        self.split = split
        self.object = kittiReader(self.root_dir, self.split)
        self.objectType = objectType
        self.scale = scale
        self.resolution = resolution
        self.limitX = limitX
        self.limitY = limitY
        self.limitZ = limitZ

    def read_kitti_data(self, idx=0):
        '''Inputs:  idx: training or testing sample index
           Outputs: training: (voxel, obj_map, cord_map); testing: voxel
        '''
        if self.split == 'training':
            # read lidar data, labels and calibration
            kitti_Object3Ds = self.object.get_label_objects(idx)
            kitti_Lidar = self.object.get_lidar(idx)
            kitti_Calib = self.object.get_calibration(idx)
            # lidar filtering
            filter_fov = filter_camera_fov(kitti_Lidar)
            filter_range = filter_lidar_range(filter_fov, self.limitX, self.limitY, self.limitZ)
            # label filtering
            centers, boxsizes, rotates = filter_label(kitti_Object3Ds, self.objectType)
            if centers is None:
                return None, None, None
            # label centers: from camera coordinates to velodyne coordinates
            if kitti_Calib is not None:
                proj_velo = proj_to_velo(kitti_Calib)[:, :3]
                centers = np.dot(centers, proj_velo.transpose())[:, :3]
            corners = compute_3d_corners(centers, boxsizes, rotates)
            filter_centers, filter_corners, boxsizes = filter_center_corners(
                centers, corners, boxsizes, self.limitX, self.limitY, self.limitZ)
            if filter_centers is None:
                return None, None, None
            train_centers, train_corners = create_train_label(
                filter_centers, filter_corners, boxsizes, self.scale, self.resolution,
                self.limitX, self.limitY, self.limitZ)
            obj_map = create_obj_map(train_centers, self.scale, self.resolution,
                                     self.limitX, self.limitY, self.limitZ)
            cord_map = create_cord_map(train_centers, train_corners, self.scale, self.resolution,
                                       self.limitX, self.limitY, self.limitZ)
            voxel = lidar_to_binary_voxel(filter_range, self.resolution,
                                          self.limitX, self.limitY, self.limitZ)
            return voxel, obj_map, cord_map
        elif self.split == 'testing':
            # read lidar data and calibration only
            kitti_Lidar = self.object.get_lidar(idx)
            kitti_Calib = self.object.get_calibration(idx)
            filter_fov = filter_camera_fov(kitti_Lidar)
            filter_range = filter_lidar_range(filter_fov, self.limitX, self.limitY, self.limitZ)
            return lidar_to_binary_voxel(filter_range, self.resolution,
                                         self.limitX, self.limitY, self.limitZ)


if __name__ == '__main__':
    data_dir = "/home/hsw/桌面/PCL_API_Doc/frustum-pointnets-master/dataset"
    kitti_3DVoxel = kitti_3DVoxel_interface(data_dir, objectType='Car', split='training',
                                            scale=0.25, resolution=(0.2, 0.2, 0.2),
                                            limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5))
    sampleIdx = 195
    voxel, obj_map, cord_map = kitti_3DVoxel.read_kitti_data(sampleIdx)
    if voxel is not None:
        print(voxel.shape)
        print(obj_map.shape)
        print(cord_map.shape)
```
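The voxelization in `lidar_to_binary_voxel` reduces to integer-dividing each shifted coordinate by the resolution and setting the hit cells to 1. A tiny self-contained numpy sketch with made-up toy points, using the same 0.2 m resolution and default limits as above:

```python
import numpy as np

# Toy points (x, y, z) in metres, inside limitX=(0,80), limitY=(-40,40), limitZ=(-2.5,1.5)
pc = np.array([[10.05, -5.30, 0.20],
               [10.14, -5.25, 0.21],   # lands in the same 0.2 m voxel as the first point
               [43.70, 12.80, -1.00]])
res = (0.2, 0.2, 0.2)
limitX, limitY, limitZ = (0, 80), (-40, 40), (-2.5, 1.5)

ix = ((pc[:, 0] - limitX[0]) / res[0]).astype(np.int32)
iy = ((pc[:, 1] - limitY[0]) / res[1]).astype(np.int32)
iz = ((pc[:, 2] - limitZ[0]) / res[2]).astype(np.int32)

voxel = np.zeros((400, 400, 20))   # (80/0.2, 80/0.2, 4/0.2)
voxel[ix, iy, iz] = 1
print(voxel.sum())   # 2.0 -- the first two points share one voxel
```

Duplicate hits simply overwrite the same cell, which is why the grid is binary occupancy rather than a point count.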

2.3 KITTI Data Loading Helpers

```python
''' Helper class and functions for loading KITTI objects.
    Author: Charles R. Qi
    Date: September 2017
'''
from __future__ import print_function

import os
import sys
import numpy as np
import cv2
from PIL import Image

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(BASE_DIR)
sys.path.append(os.path.join(ROOT_DIR, 'mayavi'))
import kitti_util as utils

try:
    raw_input          # Python 2
except NameError:
    raw_input = input  # Python 3


# static 3D data
class kitti_object(object):
    '''Load and parse object data into a usable format.'''

    def __init__(self, root_dir, split='training'):
        '''root_dir contains the training and testing folders.'''
        self.root_dir = root_dir
        self.split = split
        self.split_dir = os.path.join(root_dir, split)
        if split == 'training':
            self.num_samples = 7481
        elif split == 'testing':
            self.num_samples = 7518
        else:
            print('Unknown split: %s' % (split))
            exit(-1)
        # data directories
        self.image_dir = os.path.join(self.split_dir, 'image_2')
        self.calib_dir = os.path.join(self.split_dir, 'calib')
        self.lidar_dir = os.path.join(self.split_dir, 'velodyne')
        self.label_dir = os.path.join(self.split_dir, 'label_2')

    def __len__(self):
        return self.num_samples

    def get_image(self, idx):
        assert(idx < self.num_samples)
        img_filename = os.path.join(self.image_dir, '%06d.png' % (idx))
        return utils.load_image(img_filename)

    def get_lidar(self, idx):
        # returns an n x 4 array (x, y, z, reflectance)
        assert(idx < self.num_samples)
        lidar_filename = os.path.join(self.lidar_dir, '%06d.bin' % (idx))
        return utils.load_velo_scan(lidar_filename)

    def get_calibration(self, idx):
        assert(idx < self.num_samples)
        calib_filename = os.path.join(self.calib_dir, '%06d.txt' % (idx))
        return utils.Calibration(calib_filename)

    def get_label_objects(self, idx):
        assert(idx < self.num_samples and self.split == 'training')
        label_filename = os.path.join(self.label_dir, '%06d.txt' % (idx))
        return utils.read_label(label_filename)

    def get_depth_map(self, idx):
        pass

    def get_top_down(self, idx):
        pass


class kitti_object_video(object):
    ''' Load data for KITTI videos. '''

    def __init__(self, img_dir, lidar_dir, calib_dir):
        self.calib = utils.Calibration(calib_dir, from_video=True)
        self.img_dir = img_dir
        self.lidar_dir = lidar_dir
        self.img_filenames = sorted([os.path.join(img_dir, filename)
                                     for filename in os.listdir(img_dir)])
        self.lidar_filenames = sorted([os.path.join(lidar_dir, filename)
                                       for filename in os.listdir(lidar_dir)])
        print(len(self.img_filenames))
        print(len(self.lidar_filenames))
        # assert(len(self.img_filenames) == len(self.lidar_filenames))
        self.num_samples = len(self.img_filenames)

    def __len__(self):
        return self.num_samples

    def get_image(self, idx):
        assert(idx < self.num_samples)
        return utils.load_image(self.img_filenames[idx])

    def get_lidar(self, idx):
        assert(idx < self.num_samples)
        return utils.load_velo_scan(self.lidar_filenames[idx])

    def get_calibration(self, unused):
        return self.calib


def viz_kitti_video():
    video_path = os.path.join(ROOT_DIR, 'dataset/2011_09_26/')
    dataset = kitti_object_video(
        os.path.join(video_path, '2011_09_26_drive_0023_sync/image_02/data'),
        os.path.join(video_path, '2011_09_26_drive_0023_sync/velodyne_points/data'),
        video_path)
    print(len(dataset))
    for i in range(len(dataset)):
        img = dataset.get_image(0)
        pc = dataset.get_lidar(0)
        Image.fromarray(img).show()
        draw_lidar(pc)
        raw_input()
        pc[:, 0:3] = dataset.get_calibration().project_velo_to_rect(pc[:, 0:3])
        draw_lidar(pc)
        raw_input()
    return


def show_image_with_boxes(img, objects, calib, show3d=True):
    ''' Show an image with its 2D (and optionally 3D) bounding boxes. '''
    img1 = np.copy(img)  # for 2d bbox
    img2 = np.copy(img)  # for 3d bbox
    for obj in objects:
        if obj.type == 'DontCare':
            continue
        cv2.rectangle(img1, (int(obj.xmin), int(obj.ymin)),
                      (int(obj.xmax), int(obj.ymax)), (0, 255, 0), 2)
        box3d_pts_2d, box3d_pts_3d = utils.compute_box_3d(obj, calib.P)
        img2 = utils.draw_projected_box3d(img2, box3d_pts_2d)
    Image.fromarray(img1).show()
    if show3d:
        Image.fromarray(img2).show()


def get_lidar_in_image_fov(pc_velo, calib, xmin, ymin, xmax, ymax,
                           return_more=False, clip_distance=2.0):
    ''' Filter lidar points, keeping those inside the image FOV. '''
    pts_2d = calib.project_velo_to_image(pc_velo)
    fov_inds = (pts_2d[:, 0] < xmax) & (pts_2d[:, 0] >= xmin) & \
               (pts_2d[:, 1] < ymax) & (pts_2d[:, 1] >= ymin)
    fov_inds = fov_inds & (pc_velo[:, 0] > clip_distance)
    imgfov_pc_velo = pc_velo[fov_inds, :]
    if return_more:
        return imgfov_pc_velo, pts_2d, fov_inds
    else:
        return imgfov_pc_velo


def show_lidar_with_boxes(pc_velo, objects, calib,
                          img_fov=False, img_width=None, img_height=None):
    ''' Show all lidar points and draw the 3D boxes in velodyne coordinates. '''
    if 'mlab' not in sys.modules:
        import mayavi.mlab as mlab
    from viz_util import draw_lidar_simple, draw_lidar, draw_gt_boxes3d
    print(('All point num: ', pc_velo.shape[0]))
    fig = mlab.figure(figure=None, bgcolor=(0, 0, 0),
                      fgcolor=None, engine=None, size=(1000, 500))
    if img_fov:
        pc_velo = get_lidar_in_image_fov(pc_velo, calib, 0, 0, img_width, img_height)
        print(('FOV point num: ', pc_velo.shape[0]))
    draw_lidar(pc_velo, fig=fig)
    for obj in objects:
        if obj.type == 'DontCare':
            continue
        # draw the 3d bounding box
        box3d_pts_2d, box3d_pts_3d = utils.compute_box_3d(obj, calib.P)
        box3d_pts_3d_velo = calib.project_rect_to_velo(box3d_pts_3d)
        # draw the heading arrow
        ori3d_pts_2d, ori3d_pts_3d = utils.compute_orientation_3d(obj, calib.P)
        ori3d_pts_3d_velo = calib.project_rect_to_velo(ori3d_pts_3d)
        x1, y1, z1 = ori3d_pts_3d_velo[0, :]
        x2, y2, z2 = ori3d_pts_3d_velo[1, :]
        draw_gt_boxes3d([box3d_pts_3d_velo], fig=fig)
        mlab.plot3d([x1, x2], [y1, y2], [z1, z2], color=(0.5, 0.5, 0.5),
                    tube_radius=None, line_width=1, figure=fig)
    mlab.show(1)


def show_lidar_on_image(pc_velo, img, calib, img_width, img_height):
    ''' Project lidar points onto the image. '''
    imgfov_pc_velo, pts_2d, fov_inds = get_lidar_in_image_fov(
        pc_velo, calib, 0, 0, img_width, img_height, True)
    imgfov_pts_2d = pts_2d[fov_inds, :]
    imgfov_pc_rect = calib.project_velo_to_rect(imgfov_pc_velo)
    import matplotlib.pyplot as plt
    cmap = plt.cm.get_cmap('hsv', 256)
    # (the rest of this listing is truncated in the original post)
```
256)cmap = np.array([cmap(i) for i in range(256)])[:,:3]*255for i in range(imgfov_pts_2d.shape[0]):depth = imgfov_pc_rect[i,2]color = cmap[int(640.0/depth),:]cv2.circle(img, (int(np.round(imgfov_pts_2d[i,0])),int(np.round(imgfov_pts_2d[i,1]))),2, color=tuple(color), thickness=-1)Image.fromarray(img).show() return imgdef dataset_viz():dataset = kitti_object(os.path.join(ROOT_DIR, 'dataset/KITTI/object'))for data_idx in range(len(dataset)):# Load data from datasetobjects = dataset.get_label_objects(data_idx)objects[0].print_object()img = dataset.get_image(data_idx)img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img_height, img_width, img_channel = img.shapeprint(('Image shape: ', img.shape))pc_velo = dataset.get_lidar(data_idx)[:,0:3]calib = dataset.get_calibration(data_idx)# Draw 2d and 3d boxes on imageshow_image_with_boxes(img, objects, calib, False)raw_input()# Show all LiDAR points. Draw 3d box in LiDAR point cloudshow_lidar_with_boxes(pc_velo, objects, calib, True, img_width, img_height)raw_input()if __name__=='__main__':import mayavi.mlab as mlabfrom viz_util import draw_lidar_simple, draw_lidar, draw_gt_boxes3ddataset_viz()
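The loaders above return raw n×4 Velodyne scans, but the network in `create_graph` consumes a (400, 400, 20) occupancy grid. That shape comes directly from the crop limits and resolution passed to `kitti_3DVoxel_interface`: (80−0)/0.2 = 400, (40−(−40))/0.2 = 400, (1.5−(−2.5))/0.2 = 20. Below is a minimal sketch of that voxelization step; the function name `voxelize` is my own and not part of `prepare_data2`, which may differ in details.

```python
import numpy as np

def voxelize(points, resolution=(0.2, 0.2, 0.2),
             limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5)):
    """Discretize an n x 3 point cloud into a binary occupancy grid."""
    limits = np.array([limitX, limitY, limitZ])  # (3, 2)
    res = np.array(resolution)
    # grid shape from (max - min) / resolution -> (400, 400, 20)
    shape = np.round((limits[:, 1] - limits[:, 0]) / res).astype(int)
    voxel = np.zeros(shape, dtype=np.float32)

    # keep only points inside the cropped region
    mask = np.all((points >= limits[:, 0]) & (points < limits[:, 1]), axis=1)
    # map each surviving point to its voxel index and mark it occupied
    idx = ((points[mask] - limits[:, 0]) / res).astype(int)
    voxel[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return voxel

# toy scan: two points that fall into the same voxel, one behind the sensor (dropped)
pts = np.array([[10.0, 0.0, 0.0], [10.05, 0.0, 0.0], [-5.0, 0.0, 0.0]])
grid = voxelize(pts)
print(grid.shape, grid.sum())  # (400, 400, 20) 1.0
```

A real pipeline would also carry the reflectance channel or per-voxel point counts instead of a plain 0/1 flag, but the index arithmetic is the same.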


""" Helper methods for loading and parsing KITTI data.Author: Charles R. Qi Date: September 2017 """ from __future__ import print_functionimport numpy as np import cv2 import osclass Object3d(object):''' 3d object label '''def __init__(self, label_file_line):data = label_file_line.split(' ')data[1:] = [float(x) for x in data[1:]]# extract label, truncation, occlusionself.type = data[0] # 'Car', 'Pedestrian', ...self.truncation = data[1] # truncated pixel ratio [0..1]self.occlusion = int(data[2]) # 0=visible, 1=partly occluded, 2=fully occluded, 3=unknownself.alpha = data[3] # object observation angle [-pi..pi]# extract 2d bounding box in 0-based coordinatesself.xmin = data[4] # leftself.ymin = data[5] # topself.xmax = data[6] # rightself.ymax = data[7] # bottomself.box2d = np.array([self.xmin,self.ymin,self.xmax,self.ymax])# extract 3d bounding box informationself.h = data[8] # box heightself.w = data[9] # box widthself.l = data[10] # box length (in meters)self.t = (data[11],data[12],data[13]) # location (x,y,z) in camera coord.self.ry = data[14] # yaw angle (around Y-axis in camera coordinates) [-pi..pi]def print_object(self):print('Type, truncation, occlusion, alpha: %s, %d, %d, %f' % \(self.type, self.truncation, self.occlusion, self.alpha))print('2d bbox (x0,y0,x1,y1): %f, %f, %f, %f' % \(self.xmin, self.ymin, self.xmax, self.ymax))print('3d bbox h,w,l: %f, %f, %f' % \(self.h, self.w, self.l))print('3d bbox location, ry: (%f, %f, %f), %f' % \(self.t[0],self.t[1],self.t[2],self.ry))class Calibration(object):''' Calibration matrices and utils3d XYZ in <label>.txt are in rect camera coord.2d box xy are in image2 coordPoints in <lidar>.bin are in Velodyne coord.y_image2 = P^2_rect * x_recty_image2 = P^2_rect * R0_rect * Tr_velo_to_cam * x_velox_ref = Tr_velo_to_cam * x_velox_rect = R0_rect * x_refP^2_rect = [f^2_u, 0, c^2_u, -f^2_u b^2_x;0, f^2_v, c^2_v, -f^2_v b^2_y;0, 0, 1, 0]= K * [1|t]image2 coord:----> x-axis (u)||v y-axis (v)velodyne coord:front x, left y, up 
zrect/ref camera coord:right x, down y, front zRef (KITTI paper): http://www.cvlibs.net/publications/Geiger2013IJRR.pdfTODO(rqi): do matrix multiplication only once for each projection.'''def __init__(self, calib_filepath, from_video=False):if from_video:calibs = self.read_calib_from_video(calib_filepath)else:calibs = self.read_calib_file(calib_filepath)# Projection matrix from rect camera coord to image2 coordself.P = calibs['P2'] self.P = np.reshape(self.P, [3,4])# Rigid transform from Velodyne coord to reference camera coordself.V2C = calibs['Tr_velo_to_cam']self.V2C = np.reshape(self.V2C, [3,4])self.C2V = inverse_rigid_trans(self.V2C)# Rotation from reference camera coord to rect camera coordself.R0 = calibs['R0_rect']self.R0 = np.reshape(self.R0,[3,3])# Camera intrinsics and extrinsicsself.c_u = self.P[0,2]self.c_v = self.P[1,2]self.f_u = self.P[0,0]self.f_v = self.P[1,1]self.b_x = self.P[0,3]/(-self.f_u) # relative self.b_y = self.P[1,3]/(-self.f_v)def read_calib_file(self, filepath):''' Read in a calibration file and parse into a dictionary.Ref: https://github.com/utiasSTARS/pykitti/blob/master/pykitti/utils.py'''data = {}with open(filepath, 'r') as f:for line in f.readlines():line = line.rstrip()if len(line)==0: continuekey, value = line.split(':', 1)# The only non-float values in these files are dates, which# we don't care about anywaytry:data[key] = np.array([float(x) for x in value.split()])except ValueError:passreturn datadef read_calib_from_video(self, calib_root_dir):''' Read calibration for camera 2 from video calib files.there are calib_cam_to_cam and calib_velo_to_cam under the calib_root_dir'''data = {}cam2cam = self.read_calib_file(os.path.join(calib_root_dir, 'calib_cam_to_cam.txt'))velo2cam = self.read_calib_file(os.path.join(calib_root_dir, 'calib_velo_to_cam.txt'))Tr_velo_to_cam = np.zeros((3,4))Tr_velo_to_cam[0:3,0:3] = np.reshape(velo2cam['R'], [3,3])Tr_velo_to_cam[:,3] = velo2cam['T']data['Tr_velo_to_cam'] = np.reshape(Tr_velo_to_cam, 
[12])data['R0_rect'] = cam2cam['R_rect_00']data['P2'] = cam2cam['P_rect_02']return datadef cart2hom(self, pts_3d):''' Input: nx3 points in CartesianOupput: nx4 points in Homogeneous by pending 1'''n = pts_3d.shape[0]pts_3d_hom = np.hstack((pts_3d, np.ones((n,1))))return pts_3d_hom# =========================== # ------- 3d to 3d ---------- # =========================== def project_velo_to_ref(self, pts_3d_velo):pts_3d_velo = self.cart2hom(pts_3d_velo) # nx4return np.dot(pts_3d_velo, np.transpose(self.V2C))def project_ref_to_velo(self, pts_3d_ref):pts_3d_ref = self.cart2hom(pts_3d_ref) # nx4return np.dot(pts_3d_ref, np.transpose(self.C2V))def project_rect_to_ref(self, pts_3d_rect):''' Input and Output are nx3 points '''return np.transpose(np.dot(np.linalg.inv(self.R0), np.transpose(pts_3d_rect)))def project_ref_to_rect(self, pts_3d_ref):''' Input and Output are nx3 points '''return np.transpose(np.dot(self.R0, np.transpose(pts_3d_ref)))def project_rect_to_velo(self, pts_3d_rect):''' Input: nx3 points in rect camera coord.Output: nx3 points in velodyne coord.''' pts_3d_ref = self.project_rect_to_ref(pts_3d_rect)return self.project_ref_to_velo(pts_3d_ref)def project_velo_to_rect(self, pts_3d_velo):pts_3d_ref = self.project_velo_to_ref(pts_3d_velo)return self.project_ref_to_rect(pts_3d_ref)# =========================== # ------- 3d to 2d ---------- # =========================== def project_rect_to_image(self, pts_3d_rect):''' Input: nx3 points in rect camera coord.Output: nx2 points in image2 coord.'''pts_3d_rect = self.cart2hom(pts_3d_rect)pts_2d = np.dot(pts_3d_rect, np.transpose(self.P)) # nx3pts_2d[:,0] /= pts_2d[:,2]pts_2d[:,1] /= pts_2d[:,2]return pts_2d[:,0:2]def project_velo_to_image(self, pts_3d_velo):''' Input: nx3 points in velodyne coord.Output: nx2 points in image2 coord.'''pts_3d_rect = self.project_velo_to_rect(pts_3d_velo)return self.project_rect_to_image(pts_3d_rect)# =========================== # ------- 2d to 3d ---------- # 
=========================== def project_image_to_rect(self, uv_depth):''' Input: nx3 first two channels are uv, 3rd channelis depth in rect camera coord.Output: nx3 points in rect camera coord.'''n = uv_depth.shape[0]x = ((uv_depth[:,0]-self.c_u)*uv_depth[:,2])/self.f_u + self.b_xy = ((uv_depth[:,1]-self.c_v)*uv_depth[:,2])/self.f_v + self.b_ypts_3d_rect = np.zeros((n,3))pts_3d_rect[:,0] = xpts_3d_rect[:,1] = ypts_3d_rect[:,2] = uv_depth[:,2]return pts_3d_rectdef project_image_to_velo(self, uv_depth):pts_3d_rect = self.project_image_to_rect(uv_depth)return self.project_rect_to_velo(pts_3d_rect)def rotx(t):''' 3D Rotation about the x-axis. '''c = np.cos(t)s = np.sin(t)return np.array([[1, 0, 0],[0, c, -s],[0, s, c]])def roty(t):''' Rotation about the y-axis. '''c = np.cos(t)s = np.sin(t)return np.array([[c, 0, s],[0, 1, 0],[-s, 0, c]])def rotz(t):''' Rotation about the z-axis. '''c = np.cos(t)s = np.sin(t)return np.array([[c, -s, 0],[s, c, 0],[0, 0, 1]])def transform_from_rot_trans(R, t):''' Transforation matrix from rotation matrix and translation vector. 
'''R = R.reshape(3, 3)t = t.reshape(3, 1)return np.vstack((np.hstack([R, t]), [0, 0, 0, 1]))def inverse_rigid_trans(Tr):''' Inverse a rigid body transform matrix (3x4 as [R|t])[R'|-R't; 0|1]'''inv_Tr = np.zeros_like(Tr) # 3x4inv_Tr[0:3,0:3] = np.transpose(Tr[0:3,0:3])inv_Tr[0:3,3] = np.dot(-np.transpose(Tr[0:3,0:3]), Tr[0:3,3])return inv_Trdef read_label(label_filename):lines = [line.rstrip() for line in open(label_filename)]objects = [Object3d(line) for line in lines]return objectsdef load_image(img_filename):return cv2.imread(img_filename)def load_velo_scan(velo_filename):scan = np.fromfile(velo_filename, dtype=np.float32)scan = scan.reshape((-1, 4))return scandef project_to_image(pts_3d, P):''' Project 3d points to image plane.Usage: pts_2d = projectToImage(pts_3d, P)input: pts_3d: nx3 matrixP: 3x4 projection matrixoutput: pts_2d: nx2 matrixP(3x4) dot pts_3d_extended(4xn) = projected_pts_2d(3xn)=> normalize projected_pts_2d(2xn)<=> pts_3d_extended(nx4) dot P'(4x3) = projected_pts_2d(nx3)=> normalize projected_pts_2d(nx2)'''n = pts_3d.shape[0]pts_3d_extend = np.hstack((pts_3d, np.ones((n,1))))print(('pts_3d_extend shape: ', pts_3d_extend.shape))pts_2d = np.dot(pts_3d_extend, np.transpose(P)) # nx3pts_2d[:,0] /= pts_2d[:,2]pts_2d[:,1] /= pts_2d[:,2]return pts_2d[:,0:2]# corners_2d + corners_3d def compute_box_3d(obj, P):''' Takes an object and a projection matrix (P) and projects the 3dbounding box into the image plane.Returns:corners_2d: (8,2) array in left image coord.corners_3d: (8,3) array in in rect camera coord.'''# compute rotational matrix around yaw axisR = roty(obj.ry) # 3d bounding box dimensionsl = obj.l;w = obj.w;h = obj.h;# 3d bounding box cornersx_corners = [l/2,l/2,-l/2,-l/2,l/2,l/2,-l/2,-l/2];y_corners = [0,0,0,0,-h,-h,-h,-h];z_corners = [w/2,-w/2,-w/2,w/2,w/2,-w/2,-w/2,w/2];# rotate and translate 3d bounding boxcorners_3d = np.dot(R, np.vstack([x_corners,y_corners,z_corners]))#print corners_3d.shapecorners_3d[0,:] = corners_3d[0,:] + 
obj.t[0];corners_3d[1,:] = corners_3d[1,:] + obj.t[1];corners_3d[2,:] = corners_3d[2,:] + obj.t[2];#print 'cornsers_3d: ', corners_3d # only draw 3d bounding box for objs in front of the cameraif np.any(corners_3d[2,:]<0.1):corners_2d = Nonereturn corners_2d, np.transpose(corners_3d)# project the 3d bounding box into the image planecorners_2d = project_to_image(np.transpose(corners_3d), P);#print 'corners_2d: ', corners_2dreturn corners_2d, np.transpose(corners_3d)def compute_orientation_3d(obj, P):''' Takes an object and a projection matrix (P) and projects the 3dobject orientation vector into the image plane.Returns:orientation_2d: (2,2) array in left image coord.orientation_3d: (2,3) array in in rect camera coord.'''# compute rotational matrix around yaw axisR = roty(obj.ry)# orientation in object coordinate systemorientation_3d = np.array([[0.0, obj.l],[0,0],[0,0]])# rotate and translate in camera coordinate system, project in imageorientation_3d = np.dot(R, orientation_3d)orientation_3d[0,:] = orientation_3d[0,:] + obj.t[0]orientation_3d[1,:] = orientation_3d[1,:] + obj.t[1]orientation_3d[2,:] = orientation_3d[2,:] + obj.t[2]# vector behind image plane?if np.any(orientation_3d[2,:]<0.1):orientation_2d = Nonereturn orientation_2d, np.transpose(orientation_3d)# project orientation into the image planeorientation_2d = project_to_image(np.transpose(orientation_3d), P);return orientation_2d, np.transpose(orientation_3d)def draw_projected_box3d(image, qs, color=(255,255,255), thickness=2):''' Draw 3d bounding box in imageqs: (8,3) array of vertices for the 3d box in following order:1 -------- 0/| /|2 -------- 3 .| | | |. 
5 -------- 4|/ |/6 -------- 7'''qs = qs.astype(np.int32)for k in range(0,4):# Ref: http://docs.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.htmli,j=k,(k+1)%4# use LINE_AA for opencv3cv2.line(image, (qs[i,0],qs[i,1]), (qs[j,0],qs[j,1]), color, thickness, cv2.CV_AA)i,j=k+4,(k+1)%4 + 4cv2.line(image, (qs[i,0],qs[i,1]), (qs[j,0],qs[j,1]), color, thickness, cv2.CV_AA)i,j=k,k+4cv2.line(image, (qs[i,0],qs[i,1]), (qs[j,0],qs[j,1]), color, thickness, cv2.CV_AA)return image
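The coordinate-transform helpers above are easy to sanity-check numerically: `inverse_rigid_trans` must exactly undo a 3x4 rigid transform of the kind stored in `Tr_velo_to_cam`. The snippet below is a small self-contained check (the two functions are re-stated here so it runs on its own; the test transform is arbitrary, not real KITTI calibration data):

```python
import numpy as np

def rotz(t):
    ''' Rotation about the z-axis (same as in kitti_util). '''
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def inverse_rigid_trans(Tr):
    ''' Inverse a 3x4 rigid transform [R|t] -> [R'|-R't]. '''
    inv_Tr = np.zeros_like(Tr)
    inv_Tr[0:3, 0:3] = Tr[0:3, 0:3].T
    inv_Tr[0:3, 3] = np.dot(-Tr[0:3, 0:3].T, Tr[0:3, 3])
    return inv_Tr

# build an arbitrary rigid transform: 30-degree yaw plus a translation
Tr = np.hstack([rotz(np.pi / 6), np.array([[1.0], [2.0], [3.0]])])
inv = inverse_rigid_trans(Tr)

# applying Tr and then its inverse must return the original point
p = np.array([4.0, -1.0, 0.5, 1.0])   # homogeneous point
q = np.append(np.dot(Tr, p), 1.0)     # transformed, re-homogenized
back = np.dot(inv, q)
print(np.allclose(back, p[:3]))  # True
```

The same style of check applies to the `Calibration` projections: `project_ref_to_velo(project_velo_to_ref(pts))` should round-trip any point set once a calibration file is loaded.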

3. Testing: the model is still training; my hardware is fairly weak, so training is slow.


Summary

The above is the full content of 深度学习——3D Fully Convolutional Network for Vehicle Detection in Point Cloud模型实现 collected by 生活随笔; we hope it helps you solve the problem you encountered.

