

Deep Learning and Object Detection Series Tutorial 12-300: A Summary of Common OpenCV APIs and Their Usage

Published: 2024/10/8 · Object Detection · 142 · 豆豆

生活随笔 collected and organized this article, Deep Learning and Object Detection Series Tutorial 12-300: A Summary of Common OpenCV APIs and Their Usage, and shares it here for your reference.

@Author:Runsen

Since computer vision work requires fluency with OpenCV, this post summarizes its common APIs and usage.

OpenCV (Open Source Computer Vision) was officially released in 1999 and originated as an Intel initiative.

  • The core of OpenCV is written in C++. In Python we only use a wrapper that executes the C++ code under the hood.

  • It is useful for almost all computer vision applications, is supported on Windows, Linux, macOS, Android, and iOS, and has bindings for Python, Java, and MATLAB.

Sharpening

USM sharpening stands for Unsharp Mask, a film-era technique for increasing image sharpness that carried over into the digital age. In the film era, overlaying a blurred negative on the positive produced an edge-sharpening effect.

Indeed, sharpening cannot do without blurring; you could even say that the sharpening effect comes from blurring. USM sharpening exploits the difference between the original image and a blurred copy to sharpen the picture.

Formula: (source image − w × Gaussian blur) / (1 − w), where w is a weight between 0.1 and 0.9.
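The formula above can be sketched in plain NumPy. This is a minimal illustration, not the article's own code: `box_blur` is a naive stand-in for `cv2.GaussianBlur` (any low-pass filter works for demonstrating the idea), and `usm_sharpen` applies (source − w·blur)/(1 − w) with clipping back to uint8. Both function names and the default weight are illustrative choices.

```python
import numpy as np

def box_blur(img, k=3):
    # Naive k x k box blur as a stand-in for cv2.GaussianBlur
    # (same role here: a low-pass filtered copy of the image)
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def usm_sharpen(img, w=0.5):
    # The USM formula from the text: (source - w * blurred) / (1 - w)
    blurred = box_blur(img)
    sharpened = (img.astype(np.float64) - w * blurred) / (1.0 - w)
    # Clip back into the valid pixel range
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```

Note that a flat region is unchanged (source equals blur, so the formula returns the source), while pixels near an edge are pushed apart: the dark side gets darker and the bright side brighter, which is exactly the contrast gain the text attributes to the original-minus-blur difference.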

I think I have taken a liking to this selfie I shot at school before graduation.

import numpy as np
import matplotlib.pyplot as plt
import cv2

image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))
plt.subplot(1, 2, 1)
plt.title("Original")
plt.imshow(image)

# Create our sharpening kernel
# (the values in the matrix sum to 1)
kernel_sharpening = np.array([[-1, -1, -1],
                              [-1,  9, -1],
                              [-1, -1, -1]])

# Apply the kernel to the input image
sharpened = cv2.filter2D(image, -1, kernel_sharpening)

plt.subplot(1, 2, 2)
plt.title("Image Sharpening")
plt.imshow(sharpened)

plt.show()

Thresholding and Binarization

image = cv2.imread('demo.jpg', 0)

plt.figure(figsize=(30, 30))
plt.subplot(3, 2, 1)
plt.title("Original")
plt.imshow(image)

# Values below 127 go to 0 (black); values at or above 127 go to 255 (white)
ret, thresh1 = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

plt.subplot(3, 2, 2)
plt.title("Threshold Binary")
plt.imshow(thresh1)

# Blur the image to remove noise
image = cv2.GaussianBlur(image, (3, 3), 0)

# adaptiveThreshold
thresh = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 5)
plt.subplot(3, 2, 3)
plt.title("Adaptive Mean Thresholding")
plt.imshow(thresh)

_, th2 = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

plt.subplot(3, 2, 4)
plt.title("Otsu's Thresholding")
plt.imshow(th2)

plt.subplot(3, 2, 5)
# Otsu's thresholding after Gaussian filtering
blur = cv2.GaussianBlur(image, (5, 5), 0)
_, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
plt.title("Gaussian Otsu's Thresholding")
plt.imshow(th3)
plt.show()

Denoising

image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))
plt.subplot(3, 2, 1)
plt.title("Original")
plt.imshow(image)

# Define our kernel size
kernel = np.ones((5, 5), np.uint8)

# Now we erode
erosion = cv2.erode(image, kernel, iterations=1)

plt.subplot(3, 2, 2)
plt.title("Erosion")
plt.imshow(erosion)

dilation = cv2.dilate(image, kernel, iterations=1)
plt.subplot(3, 2, 3)
plt.title("Dilation")
plt.imshow(dilation)

# Opening - erosion followed by dilation; good for removing small bright noise
opening = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)
plt.subplot(3, 2, 4)
plt.title("Opening")
plt.imshow(opening)

# Closing - dilation followed by erosion; good for closing small dark holes
closing = cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel)
plt.subplot(3, 2, 5)
plt.title("Closing")
plt.imshow(closing)

Edge Detection and Image Gradients

image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
height, width, _ = image.shape

# Extract Sobel edges (dx=1, dy=0 for the x-gradient; dx=0, dy=1 for the y-gradient)
sobel_x = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=5)
sobel_y = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=5)

plt.figure(figsize=(20, 20))

plt.subplot(3, 2, 1)
plt.title("Original")
plt.imshow(image)

plt.subplot(3, 2, 2)
plt.title("Sobel X")
plt.imshow(sobel_x)

plt.subplot(3, 2, 3)
plt.title("Sobel Y")
plt.imshow(sobel_y)

sobel_OR = cv2.bitwise_or(sobel_x, sobel_y)

plt.subplot(3, 2, 4)
plt.title("sobel_OR")
plt.imshow(sobel_OR)

laplacian = cv2.Laplacian(image, cv2.CV_64F)

plt.subplot(3, 2, 5)
plt.title("Laplacian")
plt.imshow(laplacian)

# Canny takes two values: threshold1 and threshold2.
# Any gradient above threshold2 is considered an edge;
# any value below threshold1 is considered a non-edge.
# Values between the two thresholds are classified as edges or
# non-edges depending on how they connect to strong edges.
# Here, gradients below 50 are non-edges and gradients above 120 are edges.
canny = cv2.Canny(image, 50, 120)

plt.subplot(3, 2, 6)
plt.title("Canny")
plt.imshow(canny)

Perspective Transform

image = cv2.imread('scan.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))

plt.subplot(1, 2, 1)
plt.title("Original")
plt.imshow(image)

# Coordinates of the four corners in the original image
points_A = np.float32([[320, 15], [700, 215], [85, 610], [530, 780]])

# Coordinates of the four corners of the desired output,
# using the A4 paper aspect ratio of 1:1.41
points_B = np.float32([[0, 0], [420, 0], [0, 594], [420, 594]])

# Compute the perspective transform matrix M from the two sets of four points
M = cv2.getPerspectiveTransform(points_A, points_B)

warped = cv2.warpPerspective(image, M, (420, 594))

plt.subplot(1, 2, 2)
plt.title("warpPerspective")
plt.imshow(warped)

Scaling, Resizing, and Interpolation

Resizing is easy with the cv2.resize function; its parameters are cv2.resize(image, dsize (output image size), fx (x scale), fy (y scale), interpolation).

image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

# Make the image 3/4 of its original size
image_scaled = cv2.resize(image, None, fx=0.75, fy=0.75)

plt.subplot(2, 2, 2)
plt.title("Scaling - Linear Interpolation")
plt.imshow(image_scaled)

# Double the size of the image
img_scaled = cv2.resize(image, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

plt.subplot(2, 2, 3)
plt.title("Scaling - Cubic Interpolation")
plt.imshow(img_scaled)

# Skew the resizing by setting exact dimensions
img_scaled = cv2.resize(image, (900, 400), interpolation=cv2.INTER_AREA)

plt.subplot(2, 2, 4)
plt.title("Scaling - Skewed Size")
plt.imshow(img_scaled)

Image Pyramids

Image pyramids are very useful when scaling images in object detection.
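As a sketch of the multi-scale idea behind that claim, the loop below builds a stack of progressively halved images, the structure a sliding-window detector would scan at each level. `downsample` is a crude 2×2-average stand-in for `cv2.pyrDown` (which additionally Gaussian-smooths before subsampling); the function names and the `min_size` cutoff are illustrative assumptions, not OpenCV API.

```python
import numpy as np

def downsample(img):
    # Average each 2x2 block -- a crude stand-in for cv2.pyrDown,
    # which Gaussian-blurs before dropping every other row/column
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float64)
    avg = (img[0::2, 0::2] + img[1::2, 0::2] +
           img[0::2, 1::2] + img[1::2, 1::2]) / 4
    return avg.astype(np.uint8)

def build_pyramid(img, min_size=32):
    # Keep halving until the smaller side would drop below min_size;
    # a detector would then scan each level with a fixed-size window
    levels = [img]
    while min(levels[-1].shape[:2]) // 2 >= min_size:
        levels.append(downsample(levels[-1]))
    return levels
```

For a 256×256 input and min_size=32 this yields four levels (256, 128, 64, 32), so a fixed 32×32 detection window effectively searches for objects at four different sizes in the original image.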

image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

smaller = cv2.pyrDown(image)
larger = cv2.pyrUp(image)

plt.subplot(2, 2, 2)
plt.title("Smaller")
plt.imshow(smaller)

plt.subplot(2, 2, 3)
plt.title("Larger")
plt.imshow(larger)

Cropping

image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

height, width = image.shape[:2]

# Starting pixel coordinates (top left of the cropping rectangle)
start_row, start_col = int(height * .25), int(width * .25)

# Ending pixel coordinates (bottom right)
end_row, end_col = int(height * .75), int(width * .75)

# Simply use indexing to crop out the rectangle we desire
cropped = image[start_row:end_row, start_col:end_col]

plt.subplot(2, 2, 2)
plt.title("Cropped")
plt.imshow(cropped)

Blurring

image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

# Create a 3 x 3 averaging kernel
kernel_3x3 = np.ones((3, 3), np.float32) / 9

# Use cv2.filter2D to convolve the kernel with the image
blurred = cv2.filter2D(image, -1, kernel_3x3)

plt.subplot(2, 2, 2)
plt.title("3x3 Kernel Blurring")
plt.imshow(blurred)

# Create a 7 x 7 averaging kernel
kernel_7x7 = np.ones((7, 7), np.float32) / 49

blurred2 = cv2.filter2D(image, -1, kernel_7x7)

plt.subplot(2, 2, 3)
plt.title("7x7 Kernel Blurring")
plt.imshow(blurred2)

Contours

# Load an image
image = cv2.imread('demo.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

plt.figure(figsize=(20, 20))

plt.subplot(2, 2, 1)
plt.title("Original")
plt.imshow(image)

# Grayscale (the image was already converted to RGB above)
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Find Canny edges
edged = cv2.Canny(gray, 30, 200)

plt.subplot(2, 2, 2)
plt.title("Canny Edges")
plt.imshow(edged)

# Finding contours
# Pass a copy, e.g. edged.copy(), since findContours may alter the image
contours, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

plt.subplot(2, 2, 3)
plt.title("Canny Edges After Contouring")
plt.imshow(edged)

print("Number of Contours found = " + str(len(contours)))

# Draw all contours
# Use '-1' as the third parameter to draw all of them
cv2.drawContours(image, contours, -1, (0, 255, 0), 3)

plt.subplot(2, 2, 4)
plt.title("Contours")
plt.imshow(image)

Summary

That concludes 生活随笔's collection of Deep Learning and Object Detection Series Tutorial 12-300: A Summary of Common OpenCV APIs and Their Usage; hopefully this article helps you solve the problems you encounter.
