

Deep Learning: Implementing the "3D Fully Convolutional Network for Vehicle Detection in Point Cloud" Model

Published: 2024/3/7

1. Reference

3D Fully Convolutional Network for Vehicle Detection in Point Cloud
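The implementation below works on a voxelized point cloud: a detection range is discretized at 0.2 m resolution into the network's input grid, and the network's output maps live on a grid shrunk by `scale = 0.25`. The arithmetic can be sanity-checked with a few lines of plain Python (a sketch using the script's default range limits, resolution, and scale):

```python
# Grid-size arithmetic used throughout the code below.
# Range limits, 0.2 m resolution, and scale = 0.25 are the script's defaults.
def grid_size(limit, resolution):
    return int(round((limit[1] - limit[0]) / resolution))

limitX, limitY, limitZ = (0, 80), (-40, 40), (-2.5, 1.5)
res = (0.2, 0.2, 0.2)
scale = 0.25  # outputSize / inputSize

in_shape = tuple(grid_size(l, r) for l, r in zip((limitX, limitY, limitZ), res))
out_shape = tuple(int(s * scale) for s in in_shape)
print(in_shape)   # (400, 400, 20) -> input voxel grid
print(out_shape)  # (100, 100, 5)  -> objectness / coordinate map grid
```

So each voxel of the output map covers a 4 x 4 x 4 block of input voxels, which is exactly the downsampling factor of the paper's network.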

2. Model Implementation

```python
''' Baidu Inc.
    Ref: 3D Fully Convolutional Network for Vehicle Detection in Point Cloud
    Author: HSW
    Date: 2018-05-02
'''
import sys
import numpy as np
import tensorflow as tf
from prepare_data2 import *
from baidu_cnn_3d import *

KITTI_TRAIN_DATA_CNT = 7481
KITTI_TEST_DATA_CNT = 7518


# create 3D-CNN model
def create_graph(sess, modelType=0, voxel_shape=(400, 400, 20),
                 activation=tf.nn.relu, is_train=True):
    '''
    Inputs:
        sess: TensorFlow Session object
        voxel_shape: voxel shape for the network's first layer
        activation: activation function
        is_train: build the graph in training mode
    Outputs:
        voxel, model, phase_train
    '''
    voxel = tf.placeholder(tf.float32,
                           [None, voxel_shape[0], voxel_shape[1], voxel_shape[2], 1])
    phase_train = tf.placeholder(tf.bool, name="phase_train") if is_train else None
    with tf.variable_scope("3D_CNN_Model") as scope:
        model = Full_CNN_3D_Model()
        model.cnn3d_graph(voxel, modelType=modelType, activation=activation,
                          phase_train=is_train)
    if is_train:
        # NOTE: the scope string must match the variable_scope above; the
        # original used "3D_CNN_model" (lower-case m), which matches nothing.
        initialized_var = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES,
                                            scope="3D_CNN_Model")
        sess.run(tf.variables_initializer(initialized_var))
    return voxel, model, phase_train


# read batch data
def read_batch_data(batch_size, data_set_dir, objectType="Car", split="training",
                    resolution=(0.2, 0.2, 0.2), scale=0.25,
                    limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5)):
    '''Yield training batches (voxel, g_obj, g_cord) or test batches (voxel).
       scale = outputSize / inputSize.
    '''
    kitti_3DVoxel = kitti_3DVoxel_interface(data_set_dir, objectType=objectType,
                                            split=split, scale=scale,
                                            resolution=resolution, limitX=limitX,
                                            limitY=limitY, limitZ=limitZ)
    TRAIN_PROCESSED_IDX = 0
    TEST_PROCESSED_IDX = 0
    if split == "training":
        while TRAIN_PROCESSED_IDX < KITTI_TRAIN_DATA_CNT:
            batch_voxel = []
            batch_g_obj = []
            batch_g_cord = []
            idx = 0
            while idx < batch_size and TRAIN_PROCESSED_IDX < KITTI_TRAIN_DATA_CNT:
                print(TRAIN_PROCESSED_IDX)
                voxel, g_obj, g_cord = kitti_3DVoxel.read_kitti_data(TRAIN_PROCESSED_IDX)
                TRAIN_PROCESSED_IDX += 1
                if voxel is None:
                    continue
                idx += 1
                batch_voxel.append(voxel)
                batch_g_obj.append(g_obj)
                batch_g_cord.append(g_cord)
            yield (np.array(batch_voxel, dtype=np.float32)[:, :, :, :, np.newaxis],
                   np.array(batch_g_obj, dtype=np.float32),
                   np.array(batch_g_cord, dtype=np.float32))
    elif split == "testing":
        while TEST_PROCESSED_IDX < KITTI_TEST_DATA_CNT:
            batch_voxel = []
            idx = 0
            while idx < batch_size and TEST_PROCESSED_IDX < KITTI_TEST_DATA_CNT:
                # the original indexed with "iter * batch_size + idx"; use the
                # running counter so every test sample is read exactly once
                voxel = kitti_3DVoxel.read_kitti_data(TEST_PROCESSED_IDX)
                TEST_PROCESSED_IDX += 1
                if voxel is None:
                    continue
                idx += 1
                batch_voxel.append(voxel)
            yield np.array(batch_voxel, dtype=np.float32)[:, :, :, :, np.newaxis]


# train 3D-CNN model
def train(batch_num, data_set_dir, modelType=0, objectType="Car",
          resolution=(0.2, 0.2, 0.2), scale=0.25, lr=0.01,
          limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5), epoch=101):
    batch_size = batch_num
    training_epochs = epoch
    # NOTE: the original divided every axis by resolution[0];
    # use the matching per-axis resolution instead
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0]))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1]))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2]))
    voxel_shape = (sizeX, sizeY, sizeZ)
    with tf.Session() as sess:
        voxel, model, phase_train = create_graph(sess, modelType=modelType,
                                                 voxel_shape=voxel_shape,
                                                 activation=tf.nn.relu, is_train=True)
        saver = tf.train.Saver()
        (total_loss, obj_loss, cord_loss, is_obj_loss, non_obj_loss,
         g_obj, g_cord, y_pred) = model.loss_Fun(lossType=0, cord_loss_weight=0.02)
        # pass the lr argument through (the original hard-coded 0.001 here)
        optimizer = model.create_optimizer(total_loss, optType="Adam", learnRate=lr)
        sess.run(tf.global_variables_initializer())
        for epoch_idx in range(training_epochs):
            batchCnt = 0
            for (batch_voxel, batch_g_obj, batch_g_cord) in read_batch_data(
                    batch_size, data_set_dir, objectType=objectType,
                    split="training", resolution=resolution, scale=scale,
                    limitX=limitX, limitY=limitY, limitZ=limitZ):
                feed = {voxel: batch_voxel, g_obj: batch_g_obj,
                        g_cord: batch_g_cord, phase_train: True}
                sess.run(optimizer, feed_dict=feed)
                cord_cost = sess.run(cord_loss, feed_dict=feed)
                obj_cost = sess.run(is_obj_loss, feed_dict=feed)
                non_obj_cost = sess.run(non_obj_loss, feed_dict=feed)
                print("Epoch:", epoch_idx + 1, ", BatchNum:", batchCnt + 1,
                      ", cord_cost = {:.9f}".format(cord_cost))
                print("Epoch:", epoch_idx + 1, ", BatchNum:", batchCnt + 1,
                      ", obj_cost = {:.9f}".format(obj_cost))
                print("Epoch:", epoch_idx + 1, ", BatchNum:", batchCnt + 1,
                      ", non_obj_cost = {:.9f}".format(non_obj_cost))
                batchCnt += 1
            if (epoch_idx > 0) and (epoch_idx % 10 == 0):
                saver.save(sess, "velodyne_kitti_train_" + str(epoch_idx) + ".ckpt")
    print("Training finished!")


# test 3D-CNN model
def test(batch_num, data_set_dir, modelType=0, objectType="Car",
         resolution=(0.2, 0.2, 0.2), scale=0.25,
         limitX=(0, 80), limitY=(-40, 40), limitZ=(-2.5, 1.5)):
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0]))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1]))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2]))
    voxel_shape = (sizeX, sizeY, sizeZ)
    batch_size = batch_num
    # "testing", not "Testing": the split string is compared case-sensitively.
    # read_batch_data is a generator and already appends the channel axis,
    # so take one batch with next() instead of reshaping the generator object.
    batch_voxel_x = next(read_batch_data(batch_size, data_set_dir,
                                         objectType=objectType, split="testing",
                                         resolution=resolution, scale=scale,
                                         limitX=limitX, limitY=limitY, limitZ=limitZ))
    with tf.Session() as sess:
        voxel, model, phase_train = create_graph(sess, modelType=modelType,
                                                 voxel_shape=voxel_shape,
                                                 activation=tf.nn.relu, is_train=False)
        saver = tf.train.Saver()
        # checkpoint written by train() at epoch 40
        saver.restore(sess, "./velodyne_kitti_train_40.ckpt")
        objectness = sess.run(model.objectness,
                              feed_dict={voxel: batch_voxel_x})[0, :, :, :, 0]
        cordinate = sess.run(model.cordinate, feed_dict={voxel: batch_voxel_x})[0]
        y_pred = sess.run(model.y, feed_dict={voxel: batch_voxel_x})[0, :, :, :, 0]
        idx = np.where(y_pred >= 0.995)
        spheres = np.vstack((idx[0], np.vstack((idx[1], idx[2])))).transpose()
        # the original called undefined spheres_to_centers / index and used
        # reshape with brackets; fixed below
        centers = sphere_to_center(spheres, scale=scale, resolution=resolution,
                                   limitX=limitX, limitY=limitY, limitZ=limitZ)
        corners = cordinate[idx].reshape(-1, 8, 3) + centers[:, np.newaxis]
        print(centers)
        print(corners)


if __name__ == "__main__":
    batch_num = 3
    data_set_dir = "/home/hsw/桌面/PCL_API_Doc/frustum-pointnets-master/dataset"
    modelType = 1
    objectType = "Car"
    resolution = (0.2, 0.2, 0.2)
    scale = 0.25
    lr = 0.001
    limitX = (0, 80)
    limitY = (-40, 40)
    limitZ = (-2.5, 1.5)
    epoch = 101
    train(batch_num, data_set_dir=data_set_dir, modelType=modelType,
          objectType=objectType, resolution=resolution, scale=scale, lr=lr,
          limitX=limitX, limitY=limitY, limitZ=limitZ, epoch=epoch)
```

2.1 Network Model
```python
''' Baidu Inc.
    Ref: 3D Fully Convolutional Network for Vehicle Detection in Point Cloud
    Author: HSW
    Date: 2018-05-02
'''
import numpy as np
import tensorflow as tf


class Full_CNN_3D_Model(object):
    '''Fully convolutional 3D detection model.'''

    def __init__(self):
        pass

    def cnn3d_graph(self, voxel, modelType=0, activation=tf.nn.relu, phase_train=True):
        if modelType == 0:
            # Modified 3D-CNN. Avoid this variant: it downsamples by 1/8,
            # which is too aggressive and causes large localization errors
            # at prediction time.
            self.layer1 = self.conv3d_layer(voxel, 1, 16, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer1", activation=activation,
                                            phase_train=phase_train)
            self.layer2 = self.conv3d_layer(self.layer1, 16, 32, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer2", activation=activation,
                                            phase_train=phase_train)
            self.layer3 = self.conv3d_layer(self.layer2, 32, 64, 3, 3, 3, [1, 2, 2, 2, 1],
                                            name="layer3", activation=activation,
                                            phase_train=phase_train)
            self.layer4 = self.conv3d_layer(self.layer3, 64, 64, 3, 3, 3, [1, 1, 1, 1, 1],
                                            name="layer4", activation=activation,
                                            phase_train=phase_train)
            self.objectness = self.conv3D_to_output(self.layer4, 64, 2, 3, 3, 3,
                                                    [1, 1, 1, 1, 1],
                                                    name="objectness", activation=None)
            self.cordinate = self.conv3D_to_output(self.layer4, 64, 24, 3, 3, 3,
                                                   [1, 1, 1, 1, 1],
                                                   name="cordinate", activation=None)
            self.y = tf.nn.softmax(self.objectness, dim=-1)
        elif modelType == 1:
            # 3D-CNN from the paper: downsamples by 1/4,
            # i.e. outputSize / inputSize = 0.25.
            self.layer1 = self.conv3d_layer(voxel, 1, 10, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer1", activation=activation,
                                            phase_train=phase_train)
            self.layer2 = self.conv3d_layer(self.layer1, 10, 20, 5, 5, 5, [1, 2, 2, 2, 1],
                                            name="layer2", activation=activation,
                                            phase_train=phase_train)
            self.layer3 = self.conv3d_layer(self.layer2, 20, 30, 3, 3, 3, [1, 2, 2, 2, 1],
                                            name="layer3", activation=activation,
                                            phase_train=phase_train)
            # the deconv layer upsamples layer3 back to layer2's spatial size
            base_shape = self.layer2.get_shape().as_list()
            obj_output_shape = [tf.shape(self.layer3)[0], base_shape[1],
                                base_shape[2], base_shape[3], 2]
            cord_output_shape = [tf.shape(self.layer3)[0], base_shape[1],
                                 base_shape[2], base_shape[3], 24]
            self.objectness = self.deconv3D_to_output(self.layer3, 30, 2, 3, 3, 3,
                                                      [1, 2, 2, 2, 1], obj_output_shape,
                                                      name="objectness", activation=None)
            self.cordinate = self.deconv3D_to_output(self.layer3, 30, 24, 3, 3, 3,
                                                     [1, 2, 2, 2, 1], cord_output_shape,
                                                     name="cordinate", activation=None)
            self.y = tf.nn.softmax(self.objectness, dim=-1)

    # batch normalization
    def batch_norm(self, inputs, phase_train=True, decay=0.9, eps=1e-5):
        '''Batch-normalize `inputs`: batch statistics when phase_train is True,
        the tracked population statistics otherwise.'''
        gamma = tf.get_variable("gamma", shape=inputs.get_shape()[-1],
                                dtype=tf.float32,
                                initializer=tf.constant_initializer(1.0))
        beta = tf.get_variable("beta", shape=inputs.get_shape()[-1],
                               dtype=tf.float32,
                               initializer=tf.constant_initializer(0.0))
        pop_mean = tf.get_variable("pop_mean", trainable=False,
                                   shape=inputs.get_shape()[-1], dtype=tf.float32,
                                   initializer=tf.constant_initializer(0.0))
        pop_var = tf.get_variable("pop_var", trainable=False,
                                  shape=inputs.get_shape()[-1], dtype=tf.float32,
                                  initializer=tf.constant_initializer(1.0))
        if phase_train:
            batch_mean, batch_var = tf.nn.moments(inputs, axes=[0, 1, 2, 3])
            train_mean = tf.assign(pop_mean,
                                   pop_mean * decay + batch_mean * (1 - decay))
            train_var = tf.assign(pop_var,
                                  pop_var * decay + batch_var * (1 - decay))
            with tf.control_dependencies([train_mean, train_var]):
                return tf.nn.batch_normalization(inputs, batch_mean, batch_var,
                                                 beta, gamma, eps)
        else:
            return tf.nn.batch_normalization(inputs, pop_mean, pop_var,
                                             beta, gamma, eps)

    # 3D conv layer: conv3d -> bias -> activation -> batch norm
    def conv3d_layer(self, inputs, inputs_dims, outputs_dims, height, width, length,
                     stride, activation=tf.nn.relu, padding="SAME", name="",
                     phase_train=True):
        with tf.variable_scope("conv3D" + name):
            kernel = tf.get_variable(
                "weights", shape=[length, height, width, inputs_dims, outputs_dims],
                dtype=tf.float32,
                initializer=tf.truncated_normal_initializer(stddev=0.01))
            bias = tf.get_variable("bias", shape=[outputs_dims], dtype=tf.float32,
                                   initializer=tf.constant_initializer(0.0))
            conv = tf.nn.conv3d(inputs, kernel, stride, padding=padding)
            out = tf.nn.bias_add(conv, bias)
            if activation:
                out = activation(out, name="activation")
            out = self.batch_norm(out, phase_train)
            return out

    # 3D conv output head (objectness / coordinates)
    def conv3D_to_output(self, inputs, inputs_dims, outputs_dims, height, width,
                         length, stride, activation=tf.nn.relu, padding="SAME",
                         name="", phase_train=True):
        with tf.variable_scope("conv3D" + name):
            kernel = tf.get_variable(
                "weights", shape=[length, height, width, inputs_dims, outputs_dims],
                dtype=tf.float32, initializer=tf.constant_initializer(0.01))
            return tf.nn.conv3d(inputs, kernel, stride, padding=padding)

    # 3D deconv output head
    def deconv3D_to_output(self, inputs, inputs_dims, outputs_dims, height, width,
                           length, stride, output_shape, activation=tf.nn.relu,
                           padding="SAME", name="", phase_train=True):
        with tf.variable_scope("deconv3D" + name):
            kernel = tf.get_variable(
                "weights", shape=[length, height, width, outputs_dims, inputs_dims],
                dtype=tf.float32, initializer=tf.constant_initializer(0.01))
            return tf.nn.conv3d_transpose(inputs, kernel, output_shape, stride,
                                          padding=padding)

    # loss: objectness cross-entropy + weighted corner regression
    def loss_Fun(self, lossType=0, cord_loss_weight=0.02):
        if lossType == 0:
            g_obj = tf.placeholder(tf.float32,
                                   self.cordinate.get_shape().as_list()[:4])
            g_cord = tf.placeholder(tf.float32,
                                    self.cordinate.get_shape().as_list())
            non_g_obj = tf.subtract(tf.ones_like(g_obj, dtype=tf.float32), g_obj)
            epsilon = 0.00001
            y = self.y
            # push the object probability up at label cells ...
            is_obj_loss = -tf.reduce_sum(
                tf.multiply(g_obj, tf.log(y[:, :, :, :, 0] + epsilon)))
            # ... and down everywhere else
            non_obj_loss = tf.reduce_sum(
                tf.multiply(non_g_obj, tf.log(y[:, :, :, :, 0] + epsilon)))
            cross_entropy = tf.add(is_obj_loss, non_obj_loss)
            obj_loss = cross_entropy
            # corner regression, only at object cells
            cord_diff = tf.multiply(
                g_obj, tf.reduce_sum(tf.square(tf.subtract(self.cordinate, g_cord)), 4))
            cord_loss = tf.multiply(tf.reduce_sum(cord_diff), cord_loss_weight)
            return (tf.add(obj_loss, cord_loss), obj_loss, cord_loss,
                    is_obj_loss, non_obj_loss, g_obj, g_cord, y)

    # create optimizer
    def create_optimizer(self, all_loss, optType="Adam", learnRate=0.001):
        if optType == "Adam":
            opt = tf.train.AdamOptimizer(learnRate)
            return opt.minimize(all_loss)
```
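For `modelType = 1` the overall downsampling factor can be checked with a quick shape walk (a sketch; it assumes SAME padding and omits the batch and channel dimensions):

```python
import math

def conv_same_out(n, stride):
    # output size of a SAME-padded convolution along one axis
    return math.ceil(n / stride)

n = 400  # input grid size along X (or Y)
for layer in ("layer1", "layer2", "layer3"):  # three stride-2 convolutions
    n = conv_same_out(n, 2)
# 400 -> 200 -> 100 -> 50
deconv = n * 2  # the stride-2 transposed conv doubles the size again
print(n, deconv)  # 50 100 -> net factor 100/400 = 0.25, matching scale
```

This is why `scale = 0.25` appears everywhere in the data pipeline: the label maps must be built on the same 1/4-resolution grid that the deconvolution head produces.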

2.2 Data Preprocessing

```python
''' Prepare KITTI data for 3D object detection.
    Ref: 3D Fully Convolutional Network for Vehicle Detection in Point Cloud
    Author: Shiwen He
    Date: 28 April 2018
'''
import numpy as np
from kitti_object import kitti_object as kittiReader
import kitti_util


# filter lidar data by camera FoV (roughly -45 to +45 degrees)
def filter_camera_fov(pc):
    '''pc: n x 3 -> m x 3, m <= n.'''
    logic_fov = np.logical_and(pc[:, 1] < pc[:, 0] - 0.27,
                               -pc[:, 1] < pc[:, 0] - 0.27)
    return pc[logic_fov]


# filter lidar data by detection range
def filter_lidar_range(pc, limitX, limitY, limitZ):
    '''pc: n x 3; limits: 1 x 2 each -> m x 3, m <= n.'''
    logic_x = np.logical_and(pc[:, 0] >= limitX[0], pc[:, 0] < limitX[1])
    logic_y = np.logical_and(pc[:, 1] >= limitY[0], pc[:, 1] < limitY[1])
    logic_z = np.logical_and(pc[:, 2] >= limitZ[0], pc[:, 2] < limitZ[1])
    logic_xyz = np.logical_and(logic_x, np.logical_and(logic_y, logic_z))
    return pc[:, :3][logic_xyz]


# keep only boxes whose center and all 8 corners lie inside the range
def filter_center_corners(centers, corners, boxsizes, limitX, limitY, limitZ):
    '''centers: n x 3; corners: n x 8 x 3; boxsizes: n x 3.'''
    logic_x = np.logical_and(centers[:, 0] >= limitX[0], centers[:, 0] < limitX[1])
    logic_y = np.logical_and(centers[:, 1] >= limitY[0], centers[:, 1] < limitY[1])
    logic_z = np.logical_and(centers[:, 2] >= limitZ[0], centers[:, 2] < limitZ[1])
    logic_xyz = np.logical_and(logic_x, np.logical_and(logic_y, logic_z))
    filter_centers_1 = centers[logic_xyz, :]
    filter_corners_1 = corners[logic_xyz, :, :]
    filter_boxsizes_1 = boxsizes[logic_xyz, :]
    shape_centers = filter_centers_1.shape
    filter_centers = np.zeros([shape_centers[0], 3])
    filter_corners = np.zeros([shape_centers[0], 8, 3])
    filter_boxsizes = np.zeros([shape_centers[0], 3])
    idx = 0
    for idx2 in range(shape_centers[0]):
        logic_x = np.logical_and(filter_corners_1[idx2, :, 0] >= limitX[0],
                                 filter_corners_1[idx2, :, 0] < limitX[1])
        logic_y = np.logical_and(filter_corners_1[idx2, :, 1] >= limitY[0],
                                 filter_corners_1[idx2, :, 1] < limitY[1])
        logic_z = np.logical_and(filter_corners_1[idx2, :, 2] >= limitZ[0],
                                 filter_corners_1[idx2, :, 2] < limitZ[1])
        logic_xyz = np.logical_and(logic_x, np.logical_and(logic_y, logic_z))
        if logic_xyz.all():
            filter_centers[idx, :3] = filter_centers_1[idx2, :]
            filter_corners[idx, :8, :3] = filter_corners_1[idx2, :, :]
            filter_boxsizes[idx, :3] = filter_boxsizes_1[idx2, :]
            idx += 1
    if idx > 0:
        return (filter_centers[:idx, :], filter_corners[:idx, :, :],
                filter_boxsizes[:idx, :])
    else:
        return None, None, None


def filter_label(object3Ds, objectType='Car'):
    '''Return centers (n x 3), box sizes (n x 3), rotations (n,) for labels
    of the requested type.'''
    idx = 0
    data = np.zeros([50, 7]).astype(np.float32)
    for obj in object3Ds:
        if obj.type == "DontCare":
            continue
        if obj.type == objectType:
            # position
            data[idx, 0] = obj.t[0]
            data[idx, 1] = obj.t[1]
            data[idx, 2] = obj.t[2]
            # size
            data[idx, 3] = obj.h
            data[idx, 4] = obj.w
            data[idx, 5] = obj.l
            # rotation
            data[idx, 6] = obj.ry
            idx += 1
    if idx > 0:
        return data[:idx, :3], data[:idx, 3:6], data[:idx, 6]
    else:
        return None, None, None


def proj_to_velo(calib_data):
    '''Projection matrix from camera coordinates to velodyne coordinates.'''
    rect = calib_data.R0          # calib_data["R0_rect"].reshape(3, 3)
    velo_to_cam = calib_data.V2C  # calib_data["Tr_velo_to_cam"].reshape(3, 4)
    inv_rect = np.linalg.inv(rect)
    inv_velo_to_cam = np.linalg.pinv(velo_to_cam[:, :3])
    return np.dot(inv_velo_to_cam, inv_rect)


def compute_3d_corners(centers, sizes, rotates):
    '''Return an n x 8 x 3 array of box corners in lidar coordinates.'''
    corners = []
    for place, rotate, sz in zip(centers, rotates, sizes):
        x, y, z = place
        h, w, l = sz
        if l > 10:
            continue
        corner = np.array([
            [x - l / 2., y - w / 2., z],
            [x + l / 2., y - w / 2., z],
            [x - l / 2., y + w / 2., z],
            [x - l / 2., y - w / 2., z + h],
            [x - l / 2., y + w / 2., z + h],
            [x + l / 2., y + w / 2., z],
            [x + l / 2., y - w / 2., z + h],
            [x + l / 2., y + w / 2., z + h],
        ])
        corner -= np.array([x, y, z])
        rotate_matrix = np.array([
            [np.cos(rotate), -np.sin(rotate), 0],
            [np.sin(rotate), np.cos(rotate), 0],
            [0, 0, 1],
        ])
        a = np.dot(corner, rotate_matrix.transpose())
        a += np.array([x, y, z])
        corners.append(a)
    return np.array(corners)


# lidar data to binary occupancy voxel grid
def lidar_to_binary_voxel(pc, resolution, limitX, limitY, limitZ):
    voxel_pc = np.zeros_like(pc).astype(np.int32)
    # point cloud position in the 3D grid
    voxel_pc[:, 0] = ((pc[:, 0] - limitX[0]) / resolution[0]).astype(np.int32)
    voxel_pc[:, 1] = ((pc[:, 1] - limitY[0]) / resolution[1]).astype(np.int32)
    voxel_pc[:, 2] = ((pc[:, 2] - limitZ[0]) / resolution[2]).astype(np.int32)
    # NOTE: the original misplaced the round() parentheses on the X and Y axes
    voxel = np.zeros((int(round((limitX[1] - limitX[0]) / resolution[0])),
                      int(round((limitY[1] - limitY[0]) / resolution[1])),
                      int(round((limitZ[1] - limitZ[0]) / resolution[2]))))
    voxel[voxel_pc[:, 0], voxel_pc[:, 1], voxel_pc[:, 2]] = 1
    return voxel


# label center to output-grid cell ("sphere")
def center_to_sphere(centers, boxsize, scale, resolution, limitX, limitY, limitZ):
    # move from the 3D box's bottom center to its geometric center
    move_center = centers.copy()
    move_center[:, 2] = centers[:, 2] + boxsize[:, 0] / 2
    spheres = np.zeros_like(move_center).astype(np.int32)
    spheres[:, 0] = ((move_center[:, 0] - limitX[0]) / resolution[0] * scale).astype(np.int32)
    spheres[:, 1] = ((move_center[:, 1] - limitY[0]) / resolution[1] * scale).astype(np.int32)
    spheres[:, 2] = ((move_center[:, 2] - limitZ[0]) / resolution[2] * scale).astype(np.int32)
    return spheres


# output-grid cell back to metric label center
def sphere_to_center(spheres, scale, resolution, limitX, limitY, limitZ):
    centers = np.zeros_like(spheres).astype(np.float32)
    centers[:, 0] = spheres[:, 0] * resolution[0] / scale + limitX[0]
    centers[:, 1] = spheres[:, 1] * resolution[1] / scale + limitY[0]
    centers[:, 2] = spheres[:, 2] * resolution[2] / scale + limitZ[0]
    return centers


# corner regression targets: corners relative to the recovered centers
def corners_to_train(spheres, corners, scale, resolution, limitX, limitY, limitZ):
    centers = sphere_to_center(spheres, scale, resolution, limitX, limitY, limitZ)
    train_corners = np.zeros_like(corners).astype(np.float32)
    for index, (corner, center) in enumerate(zip(corners, centers)):
        train_corners[index] = corner - center
    return train_corners


def create_train_label(centers, corners, boxsize, scale, resolution,
                       limitX, limitY, limitZ):
    train_centers = center_to_sphere(centers, boxsize, scale, resolution,
                                     limitX, limitY, limitZ)
    train_corners = corners_to_train(train_centers, corners, scale, resolution,
                                     limitX, limitY, limitZ)
    return train_centers, train_corners


def create_obj_map(train_centers, scale, resolution, limitX, limitY, limitZ):
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0] * scale))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1] * scale))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2] * scale))
    obj_map = np.zeros([sizeX, sizeY, sizeZ])
    # objectness map: 1 at every label center
    obj_map[train_centers[:, 0], train_centers[:, 1], train_centers[:, 2]] = 1
    return obj_map


def create_cord_map(train_centers, train_corners, scale, resolution,
                    limitX, limitY, limitZ):
    # reshape train_corners: n x 8 x 3 => n x 24
    corners = train_corners.reshape(train_corners.shape[0], -1)
    sizeX = int(round((limitX[1] - limitX[0]) / resolution[0] * scale))
    sizeY = int(round((limitY[1] - limitY[0]) / resolution[1] * scale))
    sizeZ = int(round((limitZ[1] - limitZ[0]) / resolution[2] * scale))
    sizeD = 24
    cord_map = np.zeros([sizeX, sizeY, sizeZ, sizeD])
    cord_map[train_centers[:, 0], train_centers[:, 1], train_centers[:, 2]] = corners
    return cord_map


# KITTI data interface
class kitti_3DVoxel_interface(object):
    def __init__(self, root_dir, objectType='Car', split='training', scale=0.25,
                 resolution=(0.2, 0.2, 0.2), limitX=(0, 80), limitY=(-40, 40),
                 limitZ=(-2.5, 1.5)):
        '''root_dir layout:
            training: root_dir->training->{velodyne, calib, label_2}
            testing:  root_dir->testing->{velodyne, calib}
        '''
        self.root_dir = root_dir
        self.split = split
        self.object = kittiReader(self.root_dir, self.split)
        self.objectType = objectType
        self.scale = scale
        self.resolution = resolution
        self.limitX = limitX
        self.limitY = limitY
        self.limitZ = limitZ

    def read_kitti_data(self, idx=0):
        '''Return (voxel, obj_map, cord_map) for training, or voxel for testing.'''
        if self.split == 'training':
            # read lidar data + labels + calibration
            kitti_Object3Ds = self.object.get_label_objects(idx)
            kitti_Lidar = self.object.get_lidar(idx)
            kitti_Calib = self.object.get_calibration(idx)
            # lidar filters
            filter_fov = filter_camera_fov(kitti_Lidar)
            filter_range = filter_lidar_range(filter_fov, self.limitX,
                                              self.limitY, self.limitZ)
            # label filter
            centers, boxsizes, rotates = filter_label(kitti_Object3Ds, self.objectType)
            if centers is None:
                return None, None, None
            # label centers: convert from camera to velodyne coordinates
            if kitti_Calib is not None:
                proj_velo = proj_to_velo(kitti_Calib)[:, :3]
                centers = np.dot(centers, proj_velo.transpose())[:, :3]
            corners = compute_3d_corners(centers, boxsizes, rotates)
            # filter centers + corners
            filter_centers, filter_corners, boxsizes = filter_center_corners(
                centers, corners, boxsizes, self.limitX, self.limitY, self.limitZ)
            if filter_centers is not None:
                train_centers, train_corners = create_train_label(
                    filter_centers, filter_corners, boxsizes, self.scale,
                    self.resolution, self.limitX, self.limitY, self.limitZ)
                # obj_map / cord_map / voxel
                obj_map = create_obj_map(train_centers, self.scale, self.resolution,
                                         self.limitX, self.limitY, self.limitZ)
                cord_map = create_cord_map(train_centers, train_corners, self.scale,
                                           self.resolution, self.limitX,
                                           self.limitY, self.limitZ)
                voxel = lidar_to_binary_voxel(filter_range, self.resolution,
                                              self.limitX, self.limitY, self.limitZ)
                return voxel, obj_map, cord_map
            else:
                return None, None, None
        elif self.split == 'testing':
            # read lidar data + calibration
            kitti_Lidar = self.object.get_lidar(idx)
            kitti_Calib = self.object.get_calibration(idx)
            filter_fov = filter_camera_fov(kitti_Lidar)
            filter_range = filter_lidar_range(filter_fov, self.limitX,
                                              self.limitY, self.limitZ)
            voxel = lidar_to_binary_voxel(filter_range, self.resolution,
                                          self.limitX, self.limitY, self.limitZ)
            return voxel


if __name__ == '__main__':
    data_dir = "/home/hsw/桌面/PCL_API_Doc/frustum-pointnets-master/dataset"
    kitti_3DVoxel = kitti_3DVoxel_interface(data_dir, objectType='Car',
                                            split='training', scale=0.25,
                                            resolution=(0.2, 0.2, 0.2),
                                            limitX=(0, 80), limitY=(-40, 40),
                                            limitZ=(-2.5, 1.5))
    sampleIdx = 195
    voxel, obj_map, cord_map = kitti_3DVoxel.read_kitti_data(sampleIdx)
    if voxel is not None:
        print(voxel.shape)
        print(obj_map.shape)
        print(cord_map.shape)
```
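The heart of the preprocessing is the binary voxelization: every point is bucketed into a cell, and a cell is 1 if it contains at least one point. A minimal self-contained mirror of `lidar_to_binary_voxel` (same formulas, no KITTI dependency) shows the behavior:

```python
import numpy as np

def binary_voxelize(pc, resolution, limitX, limitY, limitZ):
    # mirrors lidar_to_binary_voxel: one occupancy bit per grid cell
    sizes = [int(round((hi - lo) / r))
             for (lo, hi), r in zip((limitX, limitY, limitZ), resolution)]
    voxel = np.zeros(sizes, dtype=np.float32)
    ix = ((pc[:, 0] - limitX[0]) / resolution[0]).astype(np.int32)
    iy = ((pc[:, 1] - limitY[0]) / resolution[1]).astype(np.int32)
    iz = ((pc[:, 2] - limitZ[0]) / resolution[2]).astype(np.int32)
    voxel[ix, iy, iz] = 1.0
    return voxel

pts = np.array([[10.0, 0.0, 0.0], [10.05, 0.05, 0.0]])  # two points, same cell
v = binary_voxelize(pts, (0.2, 0.2, 0.2), (0, 80), (-40, 40), (-2.5, 1.5))
print(v.shape)  # (400, 400, 20)
print(v.sum())  # 1.0 -- both points land in a single voxel
```

Note that the grid discards point density: ten points in one cell produce the same input as one, which is exactly the binary occupancy representation the paper's network consumes.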

2.3 Loading KITTI Data

```python
''' Helper class and functions for loading KITTI objects.
    Author: Charles R. Qi
    Date: September 2017
'''
from __future__ import print_function

import os
import sys
import numpy as np
import cv2
from PIL import Image

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.dirname(BASE_DIR)
sys.path.append(os.path.join(ROOT_DIR, 'mayavi'))
import kitti_util as utils

try:
    raw_input          # Python 2
except NameError:
    raw_input = input  # Python 3


class kitti_object(object):
    '''Load and parse object data into a usable format.'''

    def __init__(self, root_dir, split='training'):
        '''root_dir contains training and testing folders.'''
        self.root_dir = root_dir
        self.split = split
        self.split_dir = os.path.join(root_dir, split)
        if split == 'training':
            self.num_samples = 7481
        elif split == 'testing':
            self.num_samples = 7518
        else:
            print('Unknown split: %s' % (split))
            exit(-1)
        # data dirs
        self.image_dir = os.path.join(self.split_dir, 'image_2')
        self.calib_dir = os.path.join(self.split_dir, 'calib')
        self.lidar_dir = os.path.join(self.split_dir, 'velodyne')
        self.label_dir = os.path.join(self.split_dir, 'label_2')

    def __len__(self):
        return self.num_samples

    def get_image(self, idx):
        assert(idx < self.num_samples)
        img_filename = os.path.join(self.image_dir, '%06d.png' % (idx))
        return utils.load_image(img_filename)

    def get_lidar(self, idx):
        # returns n x 4 (x, y, z, reflectance)
        assert(idx < self.num_samples)
        lidar_filename = os.path.join(self.lidar_dir, '%06d.bin' % (idx))
        return utils.load_velo_scan(lidar_filename)

    def get_calibration(self, idx):
        assert(idx < self.num_samples)
        calib_filename = os.path.join(self.calib_dir, '%06d.txt' % (idx))
        return utils.Calibration(calib_filename)

    def get_label_objects(self, idx):
        assert(idx < self.num_samples and self.split == 'training')
        label_filename = os.path.join(self.label_dir, '%06d.txt' % (idx))
        return utils.read_label(label_filename)

    def get_depth_map(self, idx):
        pass

    def get_top_down(self, idx):
        pass


class kitti_object_video(object):
    ''' Load data for KITTI videos. '''

    def __init__(self, img_dir, lidar_dir, calib_dir):
        self.calib = utils.Calibration(calib_dir, from_video=True)
        self.img_dir = img_dir
        self.lidar_dir = lidar_dir
        self.img_filenames = sorted([os.path.join(img_dir, filename)
                                     for filename in os.listdir(img_dir)])
        self.lidar_filenames = sorted([os.path.join(lidar_dir, filename)
                                       for filename in os.listdir(lidar_dir)])
        print(len(self.img_filenames))
        print(len(self.lidar_filenames))
        # assert(len(self.img_filenames) == len(self.lidar_filenames))
        self.num_samples = len(self.img_filenames)

    def __len__(self):
        return self.num_samples

    def get_image(self, idx):
        assert(idx < self.num_samples)
        return utils.load_image(self.img_filenames[idx])

    def get_lidar(self, idx):
        assert(idx < self.num_samples)
        return utils.load_velo_scan(self.lidar_filenames[idx])

    def get_calibration(self, unused):
        return self.calib


def viz_kitti_video():
    video_path = os.path.join(ROOT_DIR, 'dataset/2011_09_26/')
    dataset = kitti_object_video(
        os.path.join(video_path, '2011_09_26_drive_0023_sync/image_02/data'),
        os.path.join(video_path, '2011_09_26_drive_0023_sync/velodyne_points/data'),
        video_path)
    print(len(dataset))
    for i in range(len(dataset)):
        img = dataset.get_image(0)
        pc = dataset.get_lidar(0)
        Image.fromarray(img).show()
        draw_lidar(pc)
        raw_input()
        pc[:, 0:3] = dataset.get_calibration().project_velo_to_rect(pc[:, 0:3])
        draw_lidar(pc)
        raw_input()
    return


def show_image_with_boxes(img, objects, calib, show3d=True):
    ''' Show image with 2D bounding boxes. '''
    img1 = np.copy(img)  # for 2d bbox
    img2 = np.copy(img)  # for 3d bbox
    for obj in objects:
        if obj.type == 'DontCare':
            continue
        cv2.rectangle(img1, (int(obj.xmin), int(obj.ymin)),
                      (int(obj.xmax), int(obj.ymax)), (0, 255, 0), 2)
        box3d_pts_2d, box3d_pts_3d = utils.compute_box_3d(obj, calib.P)
        img2 = utils.draw_projected_box3d(img2, box3d_pts_2d)
    Image.fromarray(img1).show()
    if show3d:
        Image.fromarray(img2).show()


def get_lidar_in_image_fov(pc_velo, calib, xmin, ymin, xmax, ymax,
                           return_more=False, clip_distance=2.0):
    ''' Filter lidar points, keep those in image FOV. '''
    pts_2d = calib.project_velo_to_image(pc_velo)
    fov_inds = (pts_2d[:, 0] < xmax) & (pts_2d[:, 0] >= xmin) & \
               (pts_2d[:, 1] < ymax) & (pts_2d[:, 1] >= ymin)
    fov_inds = fov_inds & (pc_velo[:, 0] > clip_distance)
    imgfov_pc_velo = pc_velo[fov_inds, :]
    if return_more:
        return imgfov_pc_velo, pts_2d, fov_inds
    else:
        return imgfov_pc_velo


def show_lidar_with_boxes(pc_velo, objects, calib,
                          img_fov=False, img_width=None, img_height=None):
    ''' Show all LiDAR points; draw 3D boxes in the velodyne coordinate system. '''
    if 'mlab' not in sys.modules:
        import mayavi.mlab as mlab
    from viz_util import draw_lidar_simple, draw_lidar, draw_gt_boxes3d
    print(('All point num: ', pc_velo.shape[0]))
    fig = mlab.figure(figure=None, bgcolor=(0, 0, 0), fgcolor=None,
                      engine=None, size=(1000, 500))
    if img_fov:
        pc_velo = get_lidar_in_image_fov(pc_velo, calib, 0, 0,
                                         img_width, img_height)
        print(('FOV point num: ', pc_velo.shape[0]))
    draw_lidar(pc_velo, fig=fig)
    for obj in objects:
        if obj.type == 'DontCare':
            continue
        # draw 3D bounding box
        box3d_pts_2d, box3d_pts_3d = utils.compute_box_3d(obj, calib.P)
        box3d_pts_3d_velo = calib.project_rect_to_velo(box3d_pts_3d)
        # draw heading arrow
        ori3d_pts_2d, ori3d_pts_3d = utils.compute_orientation_3d(obj, calib.P)
        ori3d_pts_3d_velo = calib.project_rect_to_velo(ori3d_pts_3d)
        x1, y1, z1 = ori3d_pts_3d_velo[0, :]
        x2, y2, z2 = ori3d_pts_3d_velo[1, :]
        draw_gt_boxes3d([box3d_pts_3d_velo], fig=fig)
        mlab.plot3d([x1, x2], [y1, y2], [z1, z2], color=(0.5, 0.5, 0.5),
                    tube_radius=None, line_width=1, figure=fig)
    mlab.show(1)


def show_lidar_on_image(pc_velo, img, calib, img_width, img_height):
    ''' Project LiDAR points to image. '''
    imgfov_pc_velo, pts_2d, fov_inds = get_lidar_in_image_fov(
        pc_velo, calib, 0, 0, img_width, img_height, True)
    imgfov_pts_2d = pts_2d[fov_inds, :]
    imgfov_pc_rect = calib.project_velo_to_rect(imgfov_pc_velo)
    import matplotlib.pyplot as plt
    cmap = plt.cm.get_cmap('hsv',
```
256)cmap = np.array([cmap(i) for i in range(256)])[:,:3]*255for i in range(imgfov_pts_2d.shape[0]):depth = imgfov_pc_rect[i,2]color = cmap[int(640.0/depth),:]cv2.circle(img, (int(np.round(imgfov_pts_2d[i,0])),int(np.round(imgfov_pts_2d[i,1]))),2, color=tuple(color), thickness=-1)Image.fromarray(img).show() return imgdef dataset_viz():dataset = kitti_object(os.path.join(ROOT_DIR, 'dataset/KITTI/object'))for data_idx in range(len(dataset)):# Load data from datasetobjects = dataset.get_label_objects(data_idx)objects[0].print_object()img = dataset.get_image(data_idx)img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img_height, img_width, img_channel = img.shapeprint(('Image shape: ', img.shape))pc_velo = dataset.get_lidar(data_idx)[:,0:3]calib = dataset.get_calibration(data_idx)# Draw 2d and 3d boxes on imageshow_image_with_boxes(img, objects, calib, False)raw_input()# Show all LiDAR points. Draw 3d box in LiDAR point cloudshow_lidar_with_boxes(pc_velo, objects, calib, True, img_width, img_height)raw_input()if __name__=='__main__':import mayavi.mlab as mlabfrom viz_util import draw_lidar_simple, draw_lidar, draw_gt_boxes3ddataset_viz()
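The core of `get_lidar_in_image_fov` above is a boolean mask that keeps a LiDAR point only if its image projection lands inside the frame and the point is at least `clip_distance` meters in front of the sensor. The sketch below replicates just that mask on toy data so the logic can be checked without the KITTI dataset; the helper name `in_image_fov_mask` and the fake projections are illustrative, not part of the original code.

```python
import numpy as np

def in_image_fov_mask(pts_2d, pc_velo, xmin, ymin, xmax, ymax, clip_distance=2.0):
    """Boolean mask replicating the FOV filter above: keep points whose
    projection falls inside [xmin,xmax) x [ymin,ymax) and whose forward
    (velodyne x) coordinate exceeds clip_distance."""
    mask = (pts_2d[:, 0] >= xmin) & (pts_2d[:, 0] < xmax) & \
           (pts_2d[:, 1] >= ymin) & (pts_2d[:, 1] < ymax)
    return mask & (pc_velo[:, 0] > clip_distance)

# Toy data: three velodyne points and pretend image projections.
pc_velo = np.array([[10.0, 0.0, 0.0],   # in front, projects inside
                    [10.0, 5.0, 0.0],   # in front, projects outside
                    [1.0,  0.0, 0.0]])  # closer than clip_distance
pts_2d = np.array([[600.0, 180.0],
                   [1300.0, 180.0],
                   [600.0, 180.0]])
mask = in_image_fov_mask(pts_2d, pc_velo, 0, 0, 1242, 375)
print(mask.tolist())  # [True, False, False]
```

Only the first point survives: the second projects past the right image edge (1242 px is the typical KITTI width), and the third is inside the 2 m clip radius.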


""" Helper methods for loading and parsing KITTI data.Author: Charles R. Qi Date: September 2017 """ from __future__ import print_functionimport numpy as np import cv2 import osclass Object3d(object):''' 3d object label '''def __init__(self, label_file_line):data = label_file_line.split(' ')data[1:] = [float(x) for x in data[1:]]# extract label, truncation, occlusionself.type = data[0] # 'Car', 'Pedestrian', ...self.truncation = data[1] # truncated pixel ratio [0..1]self.occlusion = int(data[2]) # 0=visible, 1=partly occluded, 2=fully occluded, 3=unknownself.alpha = data[3] # object observation angle [-pi..pi]# extract 2d bounding box in 0-based coordinatesself.xmin = data[4] # leftself.ymin = data[5] # topself.xmax = data[6] # rightself.ymax = data[7] # bottomself.box2d = np.array([self.xmin,self.ymin,self.xmax,self.ymax])# extract 3d bounding box informationself.h = data[8] # box heightself.w = data[9] # box widthself.l = data[10] # box length (in meters)self.t = (data[11],data[12],data[13]) # location (x,y,z) in camera coord.self.ry = data[14] # yaw angle (around Y-axis in camera coordinates) [-pi..pi]def print_object(self):print('Type, truncation, occlusion, alpha: %s, %d, %d, %f' % \(self.type, self.truncation, self.occlusion, self.alpha))print('2d bbox (x0,y0,x1,y1): %f, %f, %f, %f' % \(self.xmin, self.ymin, self.xmax, self.ymax))print('3d bbox h,w,l: %f, %f, %f' % \(self.h, self.w, self.l))print('3d bbox location, ry: (%f, %f, %f), %f' % \(self.t[0],self.t[1],self.t[2],self.ry))class Calibration(object):''' Calibration matrices and utils3d XYZ in <label>.txt are in rect camera coord.2d box xy are in image2 coordPoints in <lidar>.bin are in Velodyne coord.y_image2 = P^2_rect * x_recty_image2 = P^2_rect * R0_rect * Tr_velo_to_cam * x_velox_ref = Tr_velo_to_cam * x_velox_rect = R0_rect * x_refP^2_rect = [f^2_u, 0, c^2_u, -f^2_u b^2_x;0, f^2_v, c^2_v, -f^2_v b^2_y;0, 0, 1, 0]= K * [1|t]image2 coord:----> x-axis (u)||v y-axis (v)velodyne coord:front x, left y, up 
zrect/ref camera coord:right x, down y, front zRef (KITTI paper): http://www.cvlibs.net/publications/Geiger2013IJRR.pdfTODO(rqi): do matrix multiplication only once for each projection.'''def __init__(self, calib_filepath, from_video=False):if from_video:calibs = self.read_calib_from_video(calib_filepath)else:calibs = self.read_calib_file(calib_filepath)# Projection matrix from rect camera coord to image2 coordself.P = calibs['P2'] self.P = np.reshape(self.P, [3,4])# Rigid transform from Velodyne coord to reference camera coordself.V2C = calibs['Tr_velo_to_cam']self.V2C = np.reshape(self.V2C, [3,4])self.C2V = inverse_rigid_trans(self.V2C)# Rotation from reference camera coord to rect camera coordself.R0 = calibs['R0_rect']self.R0 = np.reshape(self.R0,[3,3])# Camera intrinsics and extrinsicsself.c_u = self.P[0,2]self.c_v = self.P[1,2]self.f_u = self.P[0,0]self.f_v = self.P[1,1]self.b_x = self.P[0,3]/(-self.f_u) # relative self.b_y = self.P[1,3]/(-self.f_v)def read_calib_file(self, filepath):''' Read in a calibration file and parse into a dictionary.Ref: https://github.com/utiasSTARS/pykitti/blob/master/pykitti/utils.py'''data = {}with open(filepath, 'r') as f:for line in f.readlines():line = line.rstrip()if len(line)==0: continuekey, value = line.split(':', 1)# The only non-float values in these files are dates, which# we don't care about anywaytry:data[key] = np.array([float(x) for x in value.split()])except ValueError:passreturn datadef read_calib_from_video(self, calib_root_dir):''' Read calibration for camera 2 from video calib files.there are calib_cam_to_cam and calib_velo_to_cam under the calib_root_dir'''data = {}cam2cam = self.read_calib_file(os.path.join(calib_root_dir, 'calib_cam_to_cam.txt'))velo2cam = self.read_calib_file(os.path.join(calib_root_dir, 'calib_velo_to_cam.txt'))Tr_velo_to_cam = np.zeros((3,4))Tr_velo_to_cam[0:3,0:3] = np.reshape(velo2cam['R'], [3,3])Tr_velo_to_cam[:,3] = velo2cam['T']data['Tr_velo_to_cam'] = np.reshape(Tr_velo_to_cam, 
[12])data['R0_rect'] = cam2cam['R_rect_00']data['P2'] = cam2cam['P_rect_02']return datadef cart2hom(self, pts_3d):''' Input: nx3 points in CartesianOupput: nx4 points in Homogeneous by pending 1'''n = pts_3d.shape[0]pts_3d_hom = np.hstack((pts_3d, np.ones((n,1))))return pts_3d_hom# =========================== # ------- 3d to 3d ---------- # =========================== def project_velo_to_ref(self, pts_3d_velo):pts_3d_velo = self.cart2hom(pts_3d_velo) # nx4return np.dot(pts_3d_velo, np.transpose(self.V2C))def project_ref_to_velo(self, pts_3d_ref):pts_3d_ref = self.cart2hom(pts_3d_ref) # nx4return np.dot(pts_3d_ref, np.transpose(self.C2V))def project_rect_to_ref(self, pts_3d_rect):''' Input and Output are nx3 points '''return np.transpose(np.dot(np.linalg.inv(self.R0), np.transpose(pts_3d_rect)))def project_ref_to_rect(self, pts_3d_ref):''' Input and Output are nx3 points '''return np.transpose(np.dot(self.R0, np.transpose(pts_3d_ref)))def project_rect_to_velo(self, pts_3d_rect):''' Input: nx3 points in rect camera coord.Output: nx3 points in velodyne coord.''' pts_3d_ref = self.project_rect_to_ref(pts_3d_rect)return self.project_ref_to_velo(pts_3d_ref)def project_velo_to_rect(self, pts_3d_velo):pts_3d_ref = self.project_velo_to_ref(pts_3d_velo)return self.project_ref_to_rect(pts_3d_ref)# =========================== # ------- 3d to 2d ---------- # =========================== def project_rect_to_image(self, pts_3d_rect):''' Input: nx3 points in rect camera coord.Output: nx2 points in image2 coord.'''pts_3d_rect = self.cart2hom(pts_3d_rect)pts_2d = np.dot(pts_3d_rect, np.transpose(self.P)) # nx3pts_2d[:,0] /= pts_2d[:,2]pts_2d[:,1] /= pts_2d[:,2]return pts_2d[:,0:2]def project_velo_to_image(self, pts_3d_velo):''' Input: nx3 points in velodyne coord.Output: nx2 points in image2 coord.'''pts_3d_rect = self.project_velo_to_rect(pts_3d_velo)return self.project_rect_to_image(pts_3d_rect)# =========================== # ------- 2d to 3d ---------- # 
=========================== def project_image_to_rect(self, uv_depth):''' Input: nx3 first two channels are uv, 3rd channelis depth in rect camera coord.Output: nx3 points in rect camera coord.'''n = uv_depth.shape[0]x = ((uv_depth[:,0]-self.c_u)*uv_depth[:,2])/self.f_u + self.b_xy = ((uv_depth[:,1]-self.c_v)*uv_depth[:,2])/self.f_v + self.b_ypts_3d_rect = np.zeros((n,3))pts_3d_rect[:,0] = xpts_3d_rect[:,1] = ypts_3d_rect[:,2] = uv_depth[:,2]return pts_3d_rectdef project_image_to_velo(self, uv_depth):pts_3d_rect = self.project_image_to_rect(uv_depth)return self.project_rect_to_velo(pts_3d_rect)def rotx(t):''' 3D Rotation about the x-axis. '''c = np.cos(t)s = np.sin(t)return np.array([[1, 0, 0],[0, c, -s],[0, s, c]])def roty(t):''' Rotation about the y-axis. '''c = np.cos(t)s = np.sin(t)return np.array([[c, 0, s],[0, 1, 0],[-s, 0, c]])def rotz(t):''' Rotation about the z-axis. '''c = np.cos(t)s = np.sin(t)return np.array([[c, -s, 0],[s, c, 0],[0, 0, 1]])def transform_from_rot_trans(R, t):''' Transforation matrix from rotation matrix and translation vector. 
'''R = R.reshape(3, 3)t = t.reshape(3, 1)return np.vstack((np.hstack([R, t]), [0, 0, 0, 1]))def inverse_rigid_trans(Tr):''' Inverse a rigid body transform matrix (3x4 as [R|t])[R'|-R't; 0|1]'''inv_Tr = np.zeros_like(Tr) # 3x4inv_Tr[0:3,0:3] = np.transpose(Tr[0:3,0:3])inv_Tr[0:3,3] = np.dot(-np.transpose(Tr[0:3,0:3]), Tr[0:3,3])return inv_Trdef read_label(label_filename):lines = [line.rstrip() for line in open(label_filename)]objects = [Object3d(line) for line in lines]return objectsdef load_image(img_filename):return cv2.imread(img_filename)def load_velo_scan(velo_filename):scan = np.fromfile(velo_filename, dtype=np.float32)scan = scan.reshape((-1, 4))return scandef project_to_image(pts_3d, P):''' Project 3d points to image plane.Usage: pts_2d = projectToImage(pts_3d, P)input: pts_3d: nx3 matrixP: 3x4 projection matrixoutput: pts_2d: nx2 matrixP(3x4) dot pts_3d_extended(4xn) = projected_pts_2d(3xn)=> normalize projected_pts_2d(2xn)<=> pts_3d_extended(nx4) dot P'(4x3) = projected_pts_2d(nx3)=> normalize projected_pts_2d(nx2)'''n = pts_3d.shape[0]pts_3d_extend = np.hstack((pts_3d, np.ones((n,1))))print(('pts_3d_extend shape: ', pts_3d_extend.shape))pts_2d = np.dot(pts_3d_extend, np.transpose(P)) # nx3pts_2d[:,0] /= pts_2d[:,2]pts_2d[:,1] /= pts_2d[:,2]return pts_2d[:,0:2]# corners_2d + corners_3d def compute_box_3d(obj, P):''' Takes an object and a projection matrix (P) and projects the 3dbounding box into the image plane.Returns:corners_2d: (8,2) array in left image coord.corners_3d: (8,3) array in in rect camera coord.'''# compute rotational matrix around yaw axisR = roty(obj.ry) # 3d bounding box dimensionsl = obj.l;w = obj.w;h = obj.h;# 3d bounding box cornersx_corners = [l/2,l/2,-l/2,-l/2,l/2,l/2,-l/2,-l/2];y_corners = [0,0,0,0,-h,-h,-h,-h];z_corners = [w/2,-w/2,-w/2,w/2,w/2,-w/2,-w/2,w/2];# rotate and translate 3d bounding boxcorners_3d = np.dot(R, np.vstack([x_corners,y_corners,z_corners]))#print corners_3d.shapecorners_3d[0,:] = corners_3d[0,:] + 
obj.t[0];corners_3d[1,:] = corners_3d[1,:] + obj.t[1];corners_3d[2,:] = corners_3d[2,:] + obj.t[2];#print 'cornsers_3d: ', corners_3d # only draw 3d bounding box for objs in front of the cameraif np.any(corners_3d[2,:]<0.1):corners_2d = Nonereturn corners_2d, np.transpose(corners_3d)# project the 3d bounding box into the image planecorners_2d = project_to_image(np.transpose(corners_3d), P);#print 'corners_2d: ', corners_2dreturn corners_2d, np.transpose(corners_3d)def compute_orientation_3d(obj, P):''' Takes an object and a projection matrix (P) and projects the 3dobject orientation vector into the image plane.Returns:orientation_2d: (2,2) array in left image coord.orientation_3d: (2,3) array in in rect camera coord.'''# compute rotational matrix around yaw axisR = roty(obj.ry)# orientation in object coordinate systemorientation_3d = np.array([[0.0, obj.l],[0,0],[0,0]])# rotate and translate in camera coordinate system, project in imageorientation_3d = np.dot(R, orientation_3d)orientation_3d[0,:] = orientation_3d[0,:] + obj.t[0]orientation_3d[1,:] = orientation_3d[1,:] + obj.t[1]orientation_3d[2,:] = orientation_3d[2,:] + obj.t[2]# vector behind image plane?if np.any(orientation_3d[2,:]<0.1):orientation_2d = Nonereturn orientation_2d, np.transpose(orientation_3d)# project orientation into the image planeorientation_2d = project_to_image(np.transpose(orientation_3d), P);return orientation_2d, np.transpose(orientation_3d)def draw_projected_box3d(image, qs, color=(255,255,255), thickness=2):''' Draw 3d bounding box in imageqs: (8,3) array of vertices for the 3d box in following order:1 -------- 0/| /|2 -------- 3 .| | | |. 
5 -------- 4|/ |/6 -------- 7'''qs = qs.astype(np.int32)for k in range(0,4):# Ref: http://docs.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.htmli,j=k,(k+1)%4# use LINE_AA for opencv3cv2.line(image, (qs[i,0],qs[i,1]), (qs[j,0],qs[j,1]), color, thickness, cv2.CV_AA)i,j=k+4,(k+1)%4 + 4cv2.line(image, (qs[i,0],qs[i,1]), (qs[j,0],qs[j,1]), color, thickness, cv2.CV_AA)i,j=k,k+4cv2.line(image, (qs[i,0],qs[i,1]), (qs[j,0],qs[j,1]), color, thickness, cv2.CV_AA)return image
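The corner layout used by `compute_box_3d` above puts the box origin at the center of the bottom face (camera y points down, so the top face is at y = -h) and rotates with `roty(ry)` before translating by the label location `t`. The standalone sketch below reimplements just that geometry so it can be sanity-checked without KITTI labels; the helper name `box_corners_3d` and the toy car dimensions are illustrative.

```python
import numpy as np

def roty(t):
    # Rotation about the camera y-axis, as in kitti_util
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def box_corners_3d(l, w, h, t, ry):
    """Eight box corners in camera coords, same ordering as compute_box_3d:
    bottom face first (y=0), then top face (y=-h), rotated then translated."""
    x = [l/2, l/2, -l/2, -l/2, l/2, l/2, -l/2, -l/2]
    y = [0, 0, 0, 0, -h, -h, -h, -h]
    z = [w/2, -w/2, -w/2, w/2, w/2, -w/2, -w/2, w/2]
    corners = np.dot(roty(ry), np.vstack([x, y, z]))
    return (corners + np.array(t).reshape(3, 1)).T  # (8, 3)

# A 4.0m x 1.6m x 1.5m car, 20m ahead of the camera, no yaw:
c = box_corners_3d(l=4.0, w=1.6, h=1.5, t=(0.0, 1.0, 20.0), ry=0.0)
print(c[0])  # corner 0 sits at (l/2, t_y, t_z + w/2) = (2.0, 1.0, 20.8)
```

With `ry = pi/2` the same corner swings to `(0.8, 1.0, 18.0)`: length and width swap roles along the camera x and z axes, which is a quick way to confirm the rotation convention.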

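The `Calibration` class chains three transforms to get from LiDAR to pixels: velodyne to reference camera (`Tr_velo_to_cam`), reference to rectified camera (`R0_rect`), then the pinhole projection `P2` with a perspective divide. A minimal sketch of that chain with synthetic matrices (identity extrinsics and a made-up pinhole `P`, so the "velodyne" point is already in camera coordinates) makes the math easy to verify:

```python
import numpy as np

# Synthetic calibration: identity rotation/translation, simple pinhole P.
V2C = np.hstack([np.eye(3), np.zeros((3, 1))])  # velodyne -> ref camera (3x4)
R0 = np.eye(3)                                  # ref -> rect (3x3)
P = np.array([[700.0,   0.0, 600.0, 0.0],
              [  0.0, 700.0, 180.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])      # rect -> image2 (3x4)

def cart2hom(pts):
    # nx3 -> nx4 by appending a column of ones
    return np.hstack([pts, np.ones((pts.shape[0], 1))])

def velo_to_image(pc_velo):
    """Same chain as Calibration.project_velo_to_image:
    x_rect = R0 @ (V2C @ x_velo_hom), then perspective divide through P."""
    pts_rect = cart2hom(pc_velo) @ V2C.T @ R0.T   # nx3 rect-camera points
    pts_2d = cart2hom(pts_rect) @ P.T             # nx3 homogeneous pixels
    return pts_2d[:, :2] / pts_2d[:, 2:3]         # divide by depth

uv = velo_to_image(np.array([[0.0, 0.0, 10.0]]))  # a point 10m along camera z
print(uv)  # lands on the principal point (600, 180)
```

A point on the optical axis must project to the principal point `(c_u, c_v)` regardless of depth, which is exactly what this toy setup shows.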
3. The code passes testing and the model is still training, but my hardware is fairly weak, so training is slow.


Summary

The above is the full content of "Deep Learning: Implementing 3D Fully Convolutional Network for Vehicle Detection in Point Cloud" collected by 生活随笔; we hope it helps you solve the problems you run into.

