Android OpenCV: Finding a Template Image's Coordinates in a Larger Image — Android Development: Real-Time Camera Image Recognition and Tracking with OpenCV
Implementing real-time image recognition and tracking with OpenCV
Image Recognition
What is image recognition?
Image recognition is the technology of using computers to process, analyze, and understand images in order to identify targets and objects of various kinds. Given an observed image, the system distinguishes the categories of the objects in it and makes meaningful judgments, using modern information processing and computing techniques to simulate the human process of recognition and understanding. In general, an image recognition system consists of three parts: image segmentation, image feature extraction, and classification by a classifier.
Image segmentation divides the image into multiple meaningful regions; features are then extracted from each region, and finally a classifier assigns the image to a category based on those features. In practice there is no strict boundary between image recognition and image segmentation — in a sense, segmentation is itself a recognition process. Segmentation focuses on the relationship between object and background, studying the object's overall properties against a particular background, while recognition focuses on the properties of the object itself.
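The three-stage pipeline described above (segmentation → feature extraction → classification) can be illustrated with a toy sketch. All three functions below are simplified stand-ins for demonstration only, not real vision algorithms:

```python
# Toy illustration of the segment -> extract features -> classify pipeline.
# The "image" is a list of pixel rows; every step is deliberately naive.

def segment(image):
    # Toy "segmentation": split the image into two halves (regions).
    mid = len(image) // 2
    return [image[:mid], image[mid:]]

def extract_features(region):
    # Toy "feature": mean pixel intensity of the region.
    pixels = [p for row in region for p in row]
    return sum(pixels) / len(pixels)

def classify(feature):
    # Toy "classifier": bright regions count as foreground objects.
    return "object" if feature > 128 else "background"

image = [[200, 210], [220, 230], [10, 20], [30, 40]]
print([classify(extract_features(r)) for r in segment(image)])
# → ['object', 'background']
```

A real system replaces each stage with far more capable components (e.g. ORB features and a learned classifier), but the data flow is the same.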
The state of image recognition research
Image recognition has developed through three stages: character recognition, digital image processing and recognition, and object recognition.
As a key link in the computer vision stack, image recognition has long received heavy attention. Two years before this article was written, Microsoft announced a milestone result: its image recognition system's error rate was lower than that of humans. Since then the technology has reached a new level, driven by more open data, more open-source tooling, an evolving industry chain, and advances in high-performance AI chips, depth cameras, and deep learning algorithms — all of which keep pushing image recognition forward.
Image recognition is already familiar to most people: face, iris, and fingerprint recognition all fall under it. But the field covers much more, spanning three broad categories: biometric recognition, object and scene recognition, and video recognition. Although still far from the ideal, the steadily maturing technology has begun to find applications across many industries.
Image recognition technologies for Android
OpenCV: a cross-platform computer vision library released under the BSD license (open source) that runs on Linux, Windows, Android, and Mac OS.
It is lightweight and efficient — built from a set of C functions and a small number of C++ classes — provides interfaces for Python, Ruby, MATLAB, and other languages, and implements many general-purpose algorithms for image processing and computer vision.
TensorFlow: a deep learning framework that supports Linux, Windows, and Mac platforms, and even mobile devices.
TensorFlow offers a very rich set of deep-learning APIs — arguably the most complete among current frameworks — including basic vector and matrix operations, various optimization algorithms, the building blocks of convolutional and recurrent neural networks, and visualization tools.
YOLO (You Only Look Once): a fast and accurate real-time object detection algorithm.
A complete YOLOv3 data pipeline implemented in TensorFlow can be used to train and evaluate your own object detection models on your own datasets.
Implementation with OpenCV
The following demo uses OpenCV to recognize and track a specified image:
Approach
① Open the camera as soon as the app starts.
② Wrap each frame captured by the camera in a Mat object and compare it against the reference image, frame by frame:
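The per-frame flow of step ② can be sketched as follows. `Filter` and `NoneFilter` mirror the Java classes shown later in the post, but these are simplified Python stand-ins, not the actual OpenCV API:

```python
# Sketch of the frame callback: each captured frame is handed to the
# currently active filter, and the filter's output is what gets displayed.

class Filter:
    def apply(self, src):
        raise NotImplementedError

class NoneFilter(Filter):
    # Pass-through filter: returns the frame unchanged.
    def apply(self, src):
        return src

def on_camera_frame(frame, active_filter):
    # Called once per frame (mirrors onCameraFrame in CameraActivity).
    return active_filter.apply(frame)

frame = [[0, 1], [2, 3]]  # stand-in for an RGBA Mat
print(on_camera_frame(frame, NoneFilter()))  # → [[0, 1], [2, 3]]
```

In the real app the detection filter both searches the frame for the reference image and draws the tracking outline into it before the frame is displayed.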
Code
Permissions
AndroidManifest.xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature
    android:name="android.hardware.camera.autofocus"
    android:required="false" />
<uses-feature
    android:name="android.hardware.camera.flash"
    android:required="false" />
Requesting permissions
private void requestPermissions() {
    final int REQUEST_CODE = 1;
    // Note: storage permission is requested alongside CAMERA,
    // but only CAMERA is checked here.
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this, new String[]{
                Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE},
                REQUEST_CODE);
    }
}
Layout
activity_img_recognition.xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:opencv="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_img_recognition"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.sueed.imagerecognition.CameraActivity">

    <org.opencv.android.JavaCameraView
        android:id="@+id/jcv"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone"
        opencv:camera_id="any"
        opencv:show_fps="true" />

</RelativeLayout>
Main logic
CameraActivity.java [starts the camera, captures frames, and wraps them as Mat objects]
OpenCV's JavaCameraView extends SurfaceView, so if needed you can substitute your own class that extends SurfaceView implements SurfaceHolder.Callback.
package com.sueed.imagerecognition;
import android.Manifest;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.WindowManager;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import com.sueed.imagerecognition.filters.Filter;
import com.sueed.imagerecognition.filters.NoneFilter;
import com.sueed.imagerecognition.filters.ar.ImageDetectionFilter;
import com.sueed.imagerecognition.imagerecognition.R;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.JavaCameraView;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;
import java.io.IOException;
// Use the deprecated Camera class.
@SuppressWarnings("deprecation")
public final class CameraActivity extends AppCompatActivity implements CvCameraViewListener2 {
// A tag for log output.
private static final String TAG = CameraActivity.class.getSimpleName();
// The filters.
private Filter[] mImageDetectionFilters;
// The indices of the active filters.
private int mImageDetectionFilterIndex;
// The camera view.
private CameraBridgeViewBase mCameraView;
@Override
protected void onCreate(final Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
// Init CameraView. The frame-size cap below is illustrative;
// choose values that suit the target device.
mCameraView = new JavaCameraView(this, 0);
mCameraView.setMaxFrameSize(1280, 720);
mCameraView.setCvCameraViewListener(this);
setContentView(mCameraView);
requestPermissions();
mCameraView.enableView();
}
@Override
public void onPause() {
if (mCameraView != null) {
mCameraView.disableView();
}
super.onPause();
}
@Override
public void onResume() {
super.onResume();
// Initialize OpenCV, then re-enable the camera view that onPause() disabled.
OpenCVLoader.initDebug();
if (mCameraView != null) {
mCameraView.enableView();
}
}
@Override
public void onDestroy() {
if (mCameraView != null) {
mCameraView.disableView();
}
super.onDestroy();
}
@Override
public boolean onCreateOptionsMenu(final Menu menu) {
getMenuInflater().inflate(R.menu.activity_camera, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(final MenuItem item) {
switch (item.getItemId()) {
case R.id.menu_next_image_detection_filter:
mImageDetectionFilterIndex++;
if (mImageDetectionFilters != null && mImageDetectionFilterIndex == mImageDetectionFilters.length) {
mImageDetectionFilterIndex = 0;
}
return true;
default:
return super.onOptionsItemSelected(item);
}
}
@Override
public void onCameraViewStarted(final int width, final int height) {
Filter enkidu = null;
try {
enkidu = new ImageDetectionFilter(CameraActivity.this, R.drawable.enkidu);
} catch (IOException e) {
Log.e(TAG, "Failed to load drawable: " + "enkidu");
e.printStackTrace();
}
Filter akbarHunting = null;
try {
akbarHunting = new ImageDetectionFilter(CameraActivity.this, R.drawable.akbar_hunting_with_cheetahs);
} catch (IOException e) {
Log.e(TAG, "Failed to load drawable: " + "akbar_hunting_with_cheetahs");
e.printStackTrace();
}
mImageDetectionFilters = new Filter[]{
new NoneFilter(),
enkidu,
akbarHunting
};
}
@Override
public void onCameraViewStopped() {
}
@Override
public Mat onCameraFrame(final CvCameraViewFrame inputFrame) {
final Mat rgba = inputFrame.rgba();
if (mImageDetectionFilters != null) {
mImageDetectionFilters[mImageDetectionFilterIndex].apply(rgba, rgba);
}
return rgba;
}
}
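As an aside, the menu handler in CameraActivity cycles through the filter array by incrementing an index and wrapping it back to zero at the end. In isolation the rule looks like this (a Python sketch, not the Android code):

```python
def next_filter_index(current, count):
    # Advance to the next filter; wrap to 0 past the end
    # (mirrors onOptionsItemSelected in CameraActivity).
    current += 1
    if current == count:
        current = 0
    return current

print(next_filter_index(1, 3))  # → 2
print(next_filter_index(2, 3))  # → 0
```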
ImageDetectionFilter.java [matches image features and draws the green tracking outline]
package com.sueed.imagerecognition.filters.ar;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.opencv.android.Utils;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.DMatch;
import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import android.content.Context;
import com.sueed.imagerecognition.filters.Filter;
public final class ImageDetectionFilter implements Filter {
// The reference image (this detector's target).
private final Mat mReferenceImage;
// Features of the reference image.
private final MatOfKeyPoint mReferenceKeypoints = new MatOfKeyPoint();
// Descriptors of the reference image's features.
private final Mat mReferenceDescriptors = new Mat();
// The corner coordinates of the reference image, in pixels.
// CvType defines the color depth, number of channels, and
// channel layout in the image. Here, each point is represented
// by two 32-bit floats.
private final Mat mReferenceCorners = new Mat(4, 1, CvType.CV_32FC2);
// Features of the scene (the current frame).
private final MatOfKeyPoint mSceneKeypoints = new MatOfKeyPoint();
// Descriptors of the scene's features.
private final Mat mSceneDescriptors = new Mat();
// Tentative corner coordinates detected in the scene, in
// pixels.
private final Mat mCandidateSceneCorners = new Mat(4, 1, CvType.CV_32FC2);
// Good corner coordinates detected in the scene, in pixels.
private final Mat mSceneCorners = new Mat(0, 0, CvType.CV_32FC2);
// The good detected corner coordinates, in pixels, as integers.
private final MatOfPoint mIntSceneCorners = new MatOfPoint();
// A grayscale version of the scene.
private final Mat mGraySrc = new Mat();
// Tentative matches of scene features and reference features.
private final MatOfDMatch mMatches = new MatOfDMatch();
// A feature detector, which finds features in images.
private final FeatureDetector mFeatureDetector = FeatureDetector.create(FeatureDetector.ORB);
// A descriptor extractor, which creates descriptors of
// features.
private final DescriptorExtractor mDescriptorExtractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
// A descriptor matcher, which matches features based on their
// descriptors.
private final DescriptorMatcher mDescriptorMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);
// The color of the outline drawn around the detected image.
private final Scalar mLineColor = new Scalar(0, 255, 0);
public ImageDetectionFilter(final Context context, final int referenceImageResourceID) throws IOException {
// Load the reference image from the app's resources.
// It is loaded in BGR (blue, green, red) format.
mReferenceImage = Utils.loadResource(context, referenceImageResourceID, Imgcodecs.CV_LOAD_IMAGE_COLOR);
// Create grayscale and RGBA versions of the reference image.
final Mat referenceImageGray = new Mat();
Imgproc.cvtColor(mReferenceImage, referenceImageGray, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(mReferenceImage, mReferenceImage, Imgproc.COLOR_BGR2RGBA);
// Store the reference image's corner coordinates, in pixels.
mReferenceCorners.put(0, 0, new double[]{0.0, 0.0});
mReferenceCorners.put(1, 0, new double[]{referenceImageGray.cols(), 0.0});
mReferenceCorners.put(2, 0, new double[]{referenceImageGray.cols(), referenceImageGray.rows()});
mReferenceCorners.put(3, 0, new double[]{0.0, referenceImageGray.rows()});
// Detect the reference features and compute their
// descriptors.
mFeatureDetector.detect(referenceImageGray, mReferenceKeypoints);
mDescriptorExtractor.compute(referenceImageGray, mReferenceKeypoints, mReferenceDescriptors);
}
@Override
public void apply(final Mat src, final Mat dst) {
// Convert the scene to grayscale.
Imgproc.cvtColor(src, mGraySrc, Imgproc.COLOR_RGBA2GRAY);
// Detect the scene features, compute their descriptors,
// and match the scene descriptors to reference descriptors.
mFeatureDetector.detect(mGraySrc, mSceneKeypoints);
mDescriptorExtractor.compute(mGraySrc, mSceneKeypoints, mSceneDescriptors);
mDescriptorMatcher.match(mSceneDescriptors, mReferenceDescriptors, mMatches);
// Attempt to find the target image's corners in the scene.
findSceneCorners();
// If the corners have been found, draw an outline around the
// target image.
// Else, draw a thumbnail of the target image.
draw(src, dst);
}
private void findSceneCorners() {
final List<DMatch> matchesList = mMatches.toList();
if (matchesList.size() < 4) {
// There are too few matches to find the homography.
return;
}
final List<KeyPoint> referenceKeypointsList = mReferenceKeypoints.toList();
final List<KeyPoint> sceneKeypointsList = mSceneKeypoints.toList();
// Calculate the max and min distances between keypoints.
double maxDist = 0.0;
double minDist = Double.MAX_VALUE;
for (final DMatch match : matchesList) {
final double dist = match.distance;
if (dist < minDist) {
minDist = dist;
}
if (dist > maxDist) {
maxDist = dist;
}
}
// The thresholds for minDist are chosen subjectively
// based on testing. The unit is not related to pixel
// distances; it is related to the number of failed tests
// for similarity between the matched descriptors.
if (minDist > 50.0) {
// The target is completely lost.
// Discard any previously found corners.
mSceneCorners.create(0, 0, mSceneCorners.type());
return;
} else if (minDist > 25.0) {
// The target is lost but maybe it is still close.
// Keep any previously found corners.
return;
}
// Identify "good" keypoints based on match distance.
final ArrayList<Point> goodReferencePointsList = new ArrayList<Point>();
final ArrayList<Point> goodScenePointsList = new ArrayList<Point>();
final double maxGoodMatchDist = 1.75 * minDist;
for (final DMatch match : matchesList) {
if (match.distance < maxGoodMatchDist) {
goodReferencePointsList.add(referenceKeypointsList.get(match.trainIdx).pt);
goodScenePointsList.add(sceneKeypointsList.get(match.queryIdx).pt);
}
}
if (goodReferencePointsList.size() < 4 || goodScenePointsList.size() < 4) {
// There are too few good points to find the homography.
return;
}
// There are enough good points to find the homography.
// (Otherwise, the method would have already returned.)
// Convert the matched points to MatOfPoint2f format, as
// required by the Calib3d.findHomography function.
final MatOfPoint2f goodReferencePoints = new MatOfPoint2f();
goodReferencePoints.fromList(goodReferencePointsList);
final MatOfPoint2f goodScenePoints = new MatOfPoint2f();
goodScenePoints.fromList(goodScenePointsList);
// Find the homography.
final Mat homography = Calib3d.findHomography(goodReferencePoints, goodScenePoints);
// Use the homography to project the reference corner
// coordinates into scene coordinates.
Core.perspectiveTransform(mReferenceCorners, mCandidateSceneCorners, homography);
// Convert the scene corners to integer format, as required
// by the Imgproc.isContourConvex function.
mCandidateSceneCorners.convertTo(mIntSceneCorners, CvType.CV_32S);
// Check whether the corners form a convex polygon. If not,
// (that is, if the corners form a concave polygon), the
// detection result is invalid because no real perspective can
// make the corners of a rectangular image look like a concave
// polygon!
if (Imgproc.isContourConvex(mIntSceneCorners)) {
// The corners form a convex polygon, so record them as
// valid scene corners.
mCandidateSceneCorners.copyTo(mSceneCorners);
}
}
protected void draw(final Mat src, final Mat dst) {
if (dst != src) {
src.copyTo(dst);
}
if (mSceneCorners.height() < 4) {
// The target has not been found.
// Draw a thumbnail of the target in the upper-left
// corner so that the user knows what it is.
// Compute the thumbnail's larger dimension as half the
// video frame's smaller dimension.
int height = mReferenceImage.height();
int width = mReferenceImage.width();
final int maxDimension = Math.min(dst.width(), dst.height()) / 2;
final double aspectRatio = width / (double) height;
if (height > width) {
height = maxDimension;
width = (int) (height * aspectRatio);
} else {
width = maxDimension;
height = (int) (width / aspectRatio);
}
// Select the region of interest (ROI) where the thumbnail
// will be drawn.
final Mat dstROI = dst.submat(0, height, 0, width);
// Copy a resized reference image into the ROI.
Imgproc.resize(mReferenceImage, dstROI, dstROI.size(), 0.0, 0.0, Imgproc.INTER_AREA);
return;
}
// Outline the found target in green.
Imgproc.line(dst, new Point(mSceneCorners.get(0, 0)), new Point(mSceneCorners.get(1, 0)), mLineColor, 4);
Imgproc.line(dst, new Point(mSceneCorners.get(1, 0)), new Point(mSceneCorners.get(2, 0)), mLineColor, 4);
Imgproc.line(dst, new Point(mSceneCorners.get(2, 0)), new Point(mSceneCorners.get(3, 0)), mLineColor, 4);
Imgproc.line(dst, new Point(mSceneCorners.get(3, 0)), new Point(mSceneCorners.get(0, 0)), mLineColor, 4);
}
}
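The heart of findSceneCorners() above is the match-filtering rule: compute the minimum descriptor distance across all matches, bail out if the target appears lost, and otherwise keep only matches within 1.75 × minDist. The sketch below isolates that rule in plain Python; the threshold values come from the Java code, while the distances are made-up example values:

```python
def select_good_matches(distances, lost=50.0, unstable=25.0, ratio=1.75):
    # distances: descriptor distances of the tentative matches.
    if not distances:
        return None
    min_dist = min(distances)
    if min_dist > lost:
        return None  # target completely lost: caller discards previous corners
    if min_dist > unstable:
        return None  # target lost but maybe close: caller keeps previous corners
    threshold = ratio * min_dist
    return [d for d in distances if d < threshold]

dists = [10.0, 12.0, 30.0, 16.0, 80.0]
print(select_good_matches(dists))  # → [10.0, 12.0, 16.0]
```

In the Java version the surviving matches supply the point pairs handed to Calib3d.findHomography, and the resulting homography projects the reference corners into the scene.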
Results
Granting the permissions:
Tracking the specified image in real time
Conclusion
This article implements recognition only by comparing against a complete reference image. There are many smarter and more convenient techniques: methods such as HOG, SIFT, and SURF can, after training on positive and negative sample sets, extract features from images and use those features to determine object categories. A large part of OpenCV's functionality also goes unused in this article and awaits further exploration.
Originally published at: Android开发—基于OpenCV实现相机实时图像识别跟踪, Sueed, CSDN blog.