Simple Traffic Light Recognition: Traffic-Light-Classify
Full project source code: GitHub
Introduction
Having covered traffic sign recognition earlier, we now turn to recognizing traffic lights.
We will follow our own approach to implement and refine the whole project.
In this project we use the HSV color space to recognize traffic lights. Possible improvements:
- Faster-RCNN or SSD could be used to detect and recognize the traffic lights instead.
The first step is to load the data and visualize some of it in the RGB and HSV color spaces. The data comes from the MIT self-driving course: three classes (red, yellow, green), 1187 images in total, of which 723 are red traffic lights, 429 are green, and 35 are yellow.
Import libraries
```python
# import some libs
import cv2
import os
import glob
import random
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline

# Image data directories
IMAGE_DIR_TRAINING = "traffic_light_images/training/"
IMAGE_DIR_TEST = "traffic_light_images/test/"

# load data
def load_dataset(image_dir):
    '''This function loads in images and their labels and places them in a list.
    image_dir: directory where the images are stored
    '''
    im_list = []
    image_types = ['red', 'yellow', 'green']
    # Iterate through each color folder
    for im_type in image_types:
        file_lists = glob.glob(os.path.join(image_dir, im_type, '*'))
        print(len(file_lists))
        for file in file_lists:
            im = mpimg.imread(file)
            if im is not None:
                im_list.append((im, im_type))
    return im_list

IMAGE_LIST = load_dataset(IMAGE_DIR_TRAINING)
```

723
35
429

Visualize the data
The visualization here mainly does the following:
- Display the image
- Print the image's size
- Print the image's label
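The original notebook's display cell is not shown here; the steps above can be sketched as follows, with a synthetic stand-in image substituted for an entry of `IMAGE_LIST` so the snippet runs standalone:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

# Stand-in for IMAGE_LIST: one synthetic "red light" image and its label.
# In the notebook this list comes from load_dataset(IMAGE_DIR_TRAINING).
IMAGE_LIST = [(np.zeros((44, 24, 3), dtype=np.uint8), 'red')]

def show_image(image_list, index):
    """Display one image and print its size and its label."""
    image, label = image_list[index]
    print('Image shape:', image.shape)  # height x width x channels
    print('Label:', label)
    plt.imshow(image)
    plt.title(label)

show_image(IMAGE_LIST, 0)
```

In the notebook, changing `index` lets you spot-check images from each of the three color folders.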
Preprocess the Data
With the data loaded, we next need to standardize the inputs and outputs.
Input
As the figure above shows, the images differ in size, so the input must be standardized: every image is resized to the same dimensions. This matters for classification because the same algorithm is applied to every image, which makes standardizing them especially important.
Output
Our labels here are categorical data: 'red', 'yellow', 'green'. We can convert them to numeric data with one-hot encoding.
```python
# Standardize the input images: resize each one to 32x32x3.
# Cropping, translation, or rotation could also be applied here.
def standardize(image_list):
    '''Takes a list of (rgb image, label) pairs and returns a standardized version.
    image_list: list of (image, label) tuples
    '''
    standard_list = []
    # Iterate through all the image-label pairs
    for item in image_list:
        image = item[0]
        label = item[1]
        # Standardize the input
        standardized_im = standardize_input(image)
        # Standardize the output (one-hot)
        one_hot_label = one_hot_encode(label)
        # Append the image and its one-hot encoded label
        # to the full, processed list of image data
        standard_list.append((standardized_im, one_hot_label))
    return standard_list

def standardize_input(image):
    # Resize all images to be 32x32x3
    standard_im = cv2.resize(image, (32, 32))
    return standard_im

def one_hot_encode(label):
    '''Return the correct encoded label.
    one_hot_encode("red") should return: [1, 0, 0]
    one_hot_encode("yellow") should return: [0, 1, 0]
    one_hot_encode("green") should return: [0, 0, 1]
    '''
    if label == 'red':
        return [1, 0, 0]
    elif label == 'yellow':
        return [0, 1, 0]
    else:
        return [0, 0, 1]
```

Test your code
With the standardization code in place, we should verify that it is correct, so next we can write a function that checks the behavior of the code above.
To build an automated test harness in Python we need to organize test cases and their execution; the standard library's unittest module is a good fit for this.
Test Passed
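A minimal unittest-style sketch of such a check, reproducing `one_hot_encode` from the cell above so it is self-contained (in the notebook you would import it instead):

```python
import unittest

# Reproduced from the standardization cell above
def one_hot_encode(label):
    if label == 'red':
        return [1, 0, 0]
    elif label == 'yellow':
        return [0, 1, 0]
    else:
        return [0, 0, 1]

class TestOneHot(unittest.TestCase):
    def test_known_labels(self):
        # Each class must map to its documented one-hot vector
        self.assertEqual(one_hot_encode('red'), [1, 0, 0])
        self.assertEqual(one_hot_encode('yellow'), [0, 1, 0])
        self.assertEqual(one_hot_encode('green'), [0, 0, 1])

if __name__ == '__main__':
    # argv/exit arguments let this run inside a notebook cell as well
    unittest.main(argv=['first-arg-is-ignored'], exit=False)
```

A similar test case could assert that `standardize_input` always returns a 32x32x3 array.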
```python
Standardized_Train_List = standardize(IMAGE_LIST)
```

Feature Extraction
Here we will use color spaces, shape analysis, and feature construction.
RGB to HSV
```python
# Visualize
image_num = 0
test_im = Standardized_Train_List[image_num][0]
test_label = Standardized_Train_List[image_num][1]

# convert to hsv
hsv = cv2.cvtColor(test_im, cv2.COLOR_RGB2HSV)

# Print image label
print('Label [red, yellow, green]: ' + str(test_label))

h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]

# Plot the original image and the three channels
_, ax = plt.subplots(1, 4, figsize=(20,10))
ax[0].set_title('Standardized image')
ax[0].imshow(test_im)
ax[1].set_title('H channel')
ax[1].imshow(h, cmap='gray')
ax[2].set_title('S channel')
ax[2].imshow(s, cmap='gray')
ax[3].set_title('V channel')
ax[3].imshow(v, cmap='gray')
```

Label [red, yellow, green]: [1, 0, 0]

```python
# create feature
'''
HSV stands for Hue, Saturation, Value (also called HSB, where B is Brightness).
Hue (H) is the basic attribute of a color -- the everyday color name, such as red or yellow.
Saturation (S) is the purity of the color: the higher it is, the purer the color,
while lower values fade toward gray; it ranges over 0-100%.
Value (V), or lightness (L), also ranges over 0-100%.
'''
def create_feature(rgb_image):
    '''Basic brightness feature.
    rgb_image: an RGB image
    '''
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    sum_brightness = np.sum(hsv[:,:,2])
    area = 32*32
    avg_brightness = sum_brightness / area  # Find the average
    return avg_brightness

def high_saturation_pixels(rgb_image, threshold=80):
    '''Returns average red and green content from high-saturation pixels.
    Usually the traffic light contains the highest-saturation pixels in the image.
    The threshold was experimentally determined to be 80.
    '''
    high_sat_pixels = []
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)  # fixed: was `rgb`, an undefined name
    for i in range(32):
        for j in range(32):
            if hsv[i][j][1] > threshold:
                high_sat_pixels.append(rgb_image[i][j])
    if not high_sat_pixels:
        return highest_sat_pixel(rgb_image)

    sum_red = 0
    sum_green = 0
    for pixel in high_sat_pixels:
        sum_red += pixel[0]
        sum_green += pixel[1]
    # use sum() instead of manually adding them up
    avg_red = sum_red / len(high_sat_pixels)
    avg_green = sum_green / len(high_sat_pixels) * 0.8
    return avg_red, avg_green

def highest_sat_pixel(rgb_image):
    '''Finds the highest-saturation pixel and checks whether it has a higher green
    or a higher red content.
    '''
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    s = hsv[:,:,1]
    x, y = np.unravel_index(np.argmax(s), s.shape)
    if rgb_image[x, y, 0] > rgb_image[x, y, 1] * 0.9:
        return 1, 0  # red has a higher content
    return 0, 1
```

Test dataset
Next we load the test set to see how accurate the approach above is.
The approach above implements:
1. Computing the average brightness
2. Computing the red and green content of the high-saturation pixels
One might ask why there is no check for yellow, so we make the following improvement.
Reference URL
For some of the thresholds here, we directly use the values listed on the Wikipedia page:
Test
Next we pick three images (`img_red`, `img_yellow`, `img_green`) and look at the test results.
```python
img_test = [(img_red, 'red'), (img_yellow, 'yellow'), (img_green, 'green')]
standardtest = standardize(img_test)

for img in standardtest:
    predicted_label = estimate_label(img[0], display=True)
    print('Predict label :', predicted_label)
    print('True label:', img[1])
```

Predict label : [1, 0, 0]
True label: [1, 0, 0]
Predict label : [0, 1, 0]
True label: [0, 1, 0]
Predict label : [0, 0, 1]
True label: [0, 0, 1]

```python
# Using the load_dataset function in helpers.py
# Load test data
TEST_IMAGE_LIST = load_dataset(IMAGE_DIR_TEST)

# Standardize the test data
STANDARDIZED_TEST_LIST = standardize(TEST_IMAGE_LIST)

# Shuffle the standardized test data
random.shuffle(STANDARDIZED_TEST_LIST)
```

181
9
107

Determine the Accuracy
Now let's look at the algorithm's accuracy on the test set. The code below stores every misclassified image along with its predicted and true labels.
These are stored in MISCLASSIFIED.
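A minimal sketch of that accuracy check. The helper name `get_misclassified_images` is an assumption, and dummy stand-ins replace `STANDARDIZED_TEST_LIST` and `estimate_label` so the snippet runs standalone:

```python
def get_misclassified_images(test_images, classifier):
    """Return (image, predicted, true) triples for every misclassified image."""
    misclassified = []
    for image, true_label in test_images:
        predicted = classifier(image)
        if predicted != true_label:
            misclassified.append((image, predicted, true_label))
    return misclassified

# Dummy stand-ins; in the notebook these would be STANDARDIZED_TEST_LIST
# and estimate_label from the sections above.
fake_test_list = [('img_a', [1, 0, 0]), ('img_b', [0, 0, 1])]
always_red = lambda image: [1, 0, 0]  # trivial classifier for the demo

MISCLASSIFIED = get_misclassified_images(fake_test_list, always_red)
total = len(fake_test_list)
accuracy = (total - len(MISCLASSIFIED)) / total
print('Accuracy:', accuracy)  # -> 0.5 for this dummy data
```

Inspecting the entries of `MISCLASSIFIED` (for example, plotting them with their predicted vs. true labels) is a quick way to see which colors the HSV features confuse.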
Summary
In this post we built a simple traffic light classifier: we standardized the input images to 32x32, one-hot encoded the labels, extracted brightness and saturation features in the HSV color space, and measured accuracy on a held-out test set.