
Traffic Sign Recognition (Graduation Project)


Overview:

Get the code: click here
An implementation of the Single Shot MultiBox Detector (SSD) in TensorFlow, used to detect and classify traffic signs. The implementation runs at 40-45 fps on a GTX 1080 paired with an Intel Core i7-6700K.
Note that this project is still a work in progress; the main open issue right now is that the model overfits.

I am currently pre-training on VOC2012 first, then doing transfer learning for traffic sign detection. At the moment, only stop signs and pedestrian crossing signs are detected. Example detection images are shown below.
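Since the write-up mentions transfer learning but the script below only covers inference, here is a minimal sketch of how VOC2012-pretrained weights could be restored before fine-tuning, using the TF-0.12-era API. The variable-scope name 'ssd_base' and the checkpoint filename are hypothetical; the real names depend on how model.py defines its variables.

```python
import tensorflow as tf
from model import SSDModel  # from this repo

with tf.Graph().as_default(), tf.Session() as sess:
    model = SSDModel()  # build the SSD graph
    sess.run(tf.global_variables_initializer())

    # Hypothetical: restore only the shared backbone weights pre-trained on
    # VOC2012, leaving the detection heads randomly initialized.
    base_vars = [v for v in tf.global_variables()
                 if v.name.startswith('ssd_base')]  # assumed scope name
    tf.train.Saver(var_list=base_vars).restore(sess, 'voc2012_pretrained.ckpt')

    # ...then continue with the normal training loop from train.py...
```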

Dependencies and Code

The full inference script, inference.py from georgesung/ssd_tensorflow_traffic_sign_detection (commit 88f1781, Feb 15, 2017):

```python
'''
Run inference using trained model
'''
import tensorflow as tf
from settings import *
from model import SSDModel
from model import ModelHelper
from model import nms
import numpy as np
from sklearn.model_selection import train_test_split
import cv2
import math
import os
import time
import pickle
from PIL import Image
import matplotlib.pyplot as plt
from moviepy.editor import VideoFileClip
from optparse import OptionParser
import glob


def run_inference(image, model, sess, mode, sign_map):
    """
    Run inference on a given image

    Arguments:
        * image: Numpy array representing a single RGB image
        * model: Dict of tensor references returned by SSDModel()
        * sess: TensorFlow session reference
        * mode: String of either "image", "video", or "demo"

    Returns:
        * Numpy array representing annotated image
    """
    # Save original image in memory
    image = np.array(image)
    image_orig = np.copy(image)

    # Get relevant tensors
    x = model['x']
    is_training = model['is_training']
    preds_conf = model['preds_conf']
    preds_loc = model['preds_loc']
    probs = model['probs']

    # Convert image to PIL Image, resize it, convert to grayscale (if necessary),
    # convert back to numpy array
    image = Image.fromarray(image)
    orig_w, orig_h = image.size
    if NUM_CHANNELS == 1:
        image = image.convert('L')  # 8-bit grayscale
    image = image.resize((IMG_W, IMG_H), Image.LANCZOS)  # high-quality downsampling filter
    image = np.asarray(image)

    images = np.array([image])  # create a "batch" of 1 image
    if NUM_CHANNELS == 1:
        images = np.expand_dims(images, axis=-1)  # need extra dimension of size 1 for grayscale

    # Perform object detection
    t0 = time.time()  # keep track of duration of object detection + NMS
    preds_conf_val, preds_loc_val, probs_val = sess.run(
        [preds_conf, preds_loc, probs], feed_dict={x: images, is_training: False})
    if mode != 'video':
        print('Inference took %.1f ms (%.2f fps)' % ((time.time() - t0)*1000, 1/(time.time() - t0)))

    # Gather class predictions and confidence values
    y_pred_conf = preds_conf_val[0]  # batch size of 1, so just take [0]
    y_pred_conf = y_pred_conf.astype('float32')
    prob = probs_val[0]

    # Gather localization predictions
    y_pred_loc = preds_loc_val[0]

    # Perform NMS
    boxes = nms(y_pred_conf, y_pred_loc, prob)
    if mode != 'video':
        print('Inference + NMS took %.1f ms (%.2f fps)' % ((time.time() - t0)*1000, 1/(time.time() - t0)))

    # Rescale boxes' coordinates back to original image's dimensions
    # Recall boxes = [[x1, y1, x2, y2, cls, cls_prob], [...], ...]
    scale = np.array([orig_w/IMG_W, orig_h/IMG_H, orig_w/IMG_W, orig_h/IMG_H])
    if len(boxes) > 0:
        boxes[:, :4] = boxes[:, :4] * scale

    # Draw and annotate boxes over original image, and return annotated image
    image = image_orig
    for box in boxes:
        # Get box parameters
        box_coords = [int(round(x)) for x in box[:4]]
        cls = int(box[4])
        cls_prob = box[5]

        # Annotate image
        image = cv2.rectangle(image, tuple(box_coords[:2]), tuple(box_coords[2:]), (0, 255, 0))
        label_str = '%s %.2f' % (sign_map[cls], cls_prob)
        image = cv2.putText(image, label_str, (box_coords[0], box_coords[1]), 0, 0.5, (0, 255, 0), 1, cv2.LINE_AA)

    return image


def generate_output(input_files, mode):
    """
    Generate annotated images, videos, or sample images, based on mode
    """
    # First, load mapping from integer class ID to sign name string
    sign_map = {}
    with open('signnames.csv', 'r') as f:
        for line in f:
            line = line[:-1]  # strip newline at the end
            sign_id, sign_name = line.split(',')
            sign_map[int(sign_id)] = sign_name
    sign_map[0] = 'background'  # class ID 0 reserved for background class

    # Create output directory 'inference_out/' if needed
    if mode == 'image' or mode == 'video':
        if not os.path.isdir('./inference_out'):
            try:
                os.mkdir('./inference_out')
            except FileExistsError:
                print('Error: Cannot mkdir ./inference_out')
                return

    # Launch the graph
    with tf.Graph().as_default(), tf.Session() as sess:
        # "Instantiate" neural network, get relevant tensors
        model = SSDModel()

        # Load trained model
        saver = tf.train.Saver()
        print('Restoring previously trained model at %s' % MODEL_SAVE_PATH)
        saver.restore(sess, MODEL_SAVE_PATH)

        if mode == 'image':
            for image_file in input_files:
                print('Running inference on %s' % image_file)
                image_orig = np.asarray(Image.open(image_file))
                image = run_inference(image_orig, model, sess, mode, sign_map)

                head, tail = os.path.split(image_file)
                plt.imsave('./inference_out/%s' % tail, image)
            print('Output saved in inference_out/')

        elif mode == 'video':
            for video_file in input_files:
                print('Running inference on %s' % video_file)
                video = VideoFileClip(video_file)
                video = video.fl_image(lambda x: run_inference(x, model, sess, mode, sign_map))

                head, tail = os.path.split(video_file)
                video.write_videofile('./inference_out/%s' % tail, audio=False)
            print('Output saved in inference_out/')

        elif mode == 'demo':
            print('Demo mode: Running inference on images in sample_images/')
            image_files = os.listdir('sample_images/')
            for image_file in image_files:
                print('Running inference on sample_images/%s' % image_file)
                image_orig = np.asarray(Image.open('sample_images/' + image_file))
                image = run_inference(image_orig, model, sess, mode, sign_map)

                plt.imshow(image)
                plt.show()
        else:
            raise ValueError('Invalid mode: %s' % mode)


if __name__ == '__main__':
    # Configure command line options
    parser = OptionParser()
    parser.add_option('-i', '--input_dir', dest='input_dir',
                      help='Directory of input videos/images (ignored for "demo" mode). Will run inference on all videos/images in that dir')
    parser.add_option('-m', '--mode', dest='mode', default='image',
                      help='Operating mode, could be "image", "video", or "demo"; "demo" mode displays annotated images from sample_images/')

    # Get and parse command line options
    options, args = parser.parse_args()
    input_dir = options.input_dir
    mode = options.mode

    if mode != 'video' and mode != 'image' and mode != 'demo':
        raise ValueError('Invalid mode: %s' % mode)

    if mode != 'demo':
        input_files = glob.glob(input_dir + '/*.*')
    else:
        input_files = []

    generate_output(input_files, mode)
```

Python 3.5+
TensorFlow v0.12.0
Pickle
OpenCV Python
Matplotlib (optional)
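Note that inference.py above also imports Pillow, moviepy, and scikit-learn, so those need to be installed as well. One possible setup (a sketch only; TensorFlow 0.12 is old enough that this pin may not resolve on modern Python versions):

```
pip install tensorflow==0.12.0 opencv-python matplotlib moviepy scikit-learn Pillow
```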

Usage

Clone this repository somewhere; let's call that directory $ROOT.
To train the model from scratch, follow the workflow below:

Code Workflow

```
# Download the LISA Traffic Sign Dataset, and store it in a directory $LISA_DATA
cd $LISA_DATA
# Follow the instructions in the LISA Traffic Sign Dataset to create
# 'mergedAnnotations.csv' such that only stop signs and pedestrian crossing
# signs are shown
cp $ROOT/data_gathering/create_pickle.py $LISA_DATA
python create_pickle.py
cd $ROOT
ln -s $LISA_DATA/resized_images_* .
ln -s $LISA_DATA/data_raw_*.p .
# data_prep.py performs box matching between ground-truth boxes and default
# boxes, and packages the data into a format used later in the pipeline
python data_prep.py
# train.py trains the SSD model
python train.py
python inference.py -m demo
```
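The box matching performed by data_prep.py (see the comment in the steps above) follows the standard SSD rule: a default box is matched to a ground-truth box when their intersection-over-union (IoU) exceeds a threshold, and samples with no match at all are the ones dropped from the training set, as noted in the Dataset section. A minimal sketch of that rule, not the repo's actual code, with boxes as [x1, y1, x2, y2] arrays and a conventional 0.5 threshold:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_boxes(gt_boxes, default_boxes, threshold=0.5):
    """Map each default box index to the ground-truth box it matches."""
    matches = {}
    for gi, gt in enumerate(gt_boxes):
        for di, db in enumerate(default_boxes):
            if iou(gt, db) >= threshold:
                matches[di] = gi  # default box di is responsible for gt gi
    return matches  # empty dict => sample has no matched default boxes
```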

Performance

As mentioned above, this SSD implementation runs at 40-45 fps on a GTX 1080 with an Intel Core i7-6700K.
The inference time is the sum of the neural network's inference time and the non-maximum suppression (NMS) time. Overall, the network's inference time is significantly lower than the NMS time: the network typically takes 7-8 ms, while NMS takes 15-16 ms. The NMS algorithm implemented here is not yet optimized and runs only on the CPU, so there is room for further performance gains there.
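For reference, the NMS being timed is the usual greedy algorithm; a minimal NumPy sketch of it is below (the repo's actual nms() lives in model.py and may differ in details such as per-class handling):

```python
import numpy as np

def nms_numpy(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression on the CPU.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) confidence values
    Returns indices of kept boxes, highest-confidence first.
    """
    order = np.argsort(scores)[::-1]  # highest confidence first
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top box against all remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]  # drop heavy overlaps
    return keep
```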

Dataset

The full LISA Traffic Sign Dataset contains 47 distinct traffic sign classes. Since we only care about a subset of those classes, we use only a subset of the LISA dataset. In addition, we ignore all training samples for which no matching default box was found, which shrinks the dataset further. As a result of this process, we end up with very little data to work with.
To improve on this, we could perform image data augmentation (a sketch follows below) and/or pre-train the model on a larger dataset (e.g. VOC2012, ILSVRC).
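A hedged sketch of what that augmentation could look like with OpenCV and NumPy: brightness jitter plus small random translations (horizontal flips are deliberately avoided, since mirroring sign text would corrupt labels; all jitter ranges are illustrative):

```python
import numpy as np
import cv2

def augment(image, boxes, rng=np.random):
    """Return a randomly jittered copy of (image, boxes).

    image: HxWx3 uint8 array; boxes: (N, 4) of [x1, y1, x2, y2] in pixels.
    """
    # Brightness jitter: scale all pixel values by a random factor
    factor = rng.uniform(0.6, 1.4)
    out = np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

    # Small random translation; shift the boxes by the same amount
    h, w = out.shape[:2]
    tx = rng.randint(-w // 20, w // 20 + 1)
    ty = rng.randint(-h // 20, h // 20 + 1)
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    out = cv2.warpAffine(out, M, (w, h))
    boxes = boxes + np.array([tx, ty, tx, ty])

    # Clamp boxes back inside the image
    return out, np.clip(boxes, 0, [w - 1, h - 1, w - 1, h - 1])
```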
Download link: click to download

Code available via private message.

Summary

That concludes this write-up of the traffic sign recognition graduation project; hopefully it helps you solve the problems you encounter.
