
AnswerOpenCV (1001-1007): a week of notable questions

Other countries do not celebrate the October 1st National Day, so I used the holiday to see what they have been working on.

1. A beginner question
http://answers.opencv.org/question/199987/contour-single-blob-with-multiple-object/

Contour Single blob with multiple object

Hi to everyone.

I'm developing an object shape identification application and am stuck on separating close objects using contours, since close objects are identified as a single contour. Is there a way to separate the objects?

Things I have tried:
1. Image segmentation with distance transform and the watershed algorithm - it works for only a few images.
2. Separating the objects manually using the distance between two points, as mentioned in http://answers.opencv.org/question/71... - I am stuck on choosing the points that will separate the objects.

I have attached a sample contour for the reference.

Please suggest any comments to separate the objects.

Analysis: this problem really starts before the thresholding step. The usual idea is to preprocess the image, for example with HSV segmentation, or to apply some tricks at the thresholding stage.
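
To make the HSV idea concrete, here is a minimal sketch, assuming the touching objects differ in colour; the file name and the inRange bounds are placeholders, not values taken from the question.

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("blobs.png");          // hypothetical input image
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Segment one colour range instead of thresholding the grey image;
    // the bounds below are illustrative only and would need tuning.
    cv::inRange(hsv, cv::Scalar(35, 60, 60), cv::Scalar(85, 255, 255), mask);

    // Clean the mask a little before contour extraction.
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::drawContours(bgr, contours, -1, cv::Scalar(0, 0, 255), 2);
    cv::imshow("per-colour contours", bgr);
    cv::waitKey();
    return 0;
}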


2. Performance optimization
http://answers.opencv.org/question/109754/optimizing-splitmerge-for-clahe/

Optimizing split/merge for clahe

I am trying to squeeze the last ms from a tracking loop. One of the time-consuming parts is doing adaptive contrast enhancement (CLAHE), which is a necessary step. The results are great, but I am wondering whether I could avoid some of the copying/splitting/merging or apply other optimizations.

Basically I do the following in a tight loop:

cv::cvtColor(rgb, hsv, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> hsvChannels;
cv::split(hsv, hsvChannels);
m_clahe->apply(hsvChannels[2], hsvChannels[2]); /* m_clahe constructed outside loop */
cv::merge(hsvChannels, hsvOut);
cv::cvtColor(hsvOut, rgbOut, cv::COLOR_HSV2BGR);

On the test machine, the above snippet takes about 8 ms on 1 Mpix images; the actual CLAHE part takes only 1-2 ms.

1 answer

You can save quite a bit. First, get rid of the vector. Then, outside the loop, create a Mat for the V channel only.

Then use extractChannel and insertChannel to access the channel you're using. It only accesses the one channel, instead of all three like split does.

The reason you put the Mat outside the loop is to avoid reallocating it every pass through the loop. Right now you're allocating and deallocating three Mats every pass.

test code:

#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
?
using namespace std;
using namespace cv;
?
int main(){
?
TickMeter tm;
Ptr<CLAHE> clahe = createCLAHE();
??? clahe->setClipLimit(4);
??? vector?<Mat> hsvChannels;
? ? Mat img, hsv1, hsv2, hsvChannels2, diff;
??? img?= imread("lena.jpg");
??? cvtColor?(img, hsv1, COLOR_BGR2HSV);
??? cvtColor?(img, hsv2, COLOR_BGR2HSV);
??? tm.start();
for (int i = 0; i < 1000; i++)
{
??????? split(hsv2, hsvChannels);
??????? clahe->apply(hsvChannels[2], hsvChannels[2]);
??????? merge(hsvChannels, hsv2);
}
??? tm.stop();
??? cout<< tm << endl;
??? tm.reset();
??? tm.start();
?
for (int i = 0; i < 1000; i++)?
{
??????? extractChannel(hsv1, hsvChannels2, 2);
??????? clahe->apply(hsvChannels2, hsvChannels2);
??????? insertChannel(hsvChannels2, hsv1, 2);
}
??? tm.stop();
??? cout<< tm;
??? absdiff(hsv1, hsv2, diff);
??? imshow("diff", diff*255);
??? waitKey();
}
Running this code myself I got 4.63716 sec versus 3.80283 sec. The key change is replacing split(hsv2, hsvChannels) with extractChannel(hsv1, hsvChannels2, 2), which by itself saves about 1 ms per iteration. Since the split-based approach is the one I normally use, this question was genuinely instructive.
3. A basic algorithm

Compare two images and highlight the difference

Hi - First, I'm a total n00b, so please be kind. I'd like to create a target shooting app that allows me to use the camera on my Android device to see where I hit the target from shot to shot. The device will be stationary with very little to no movement. My thinking is that I'd access the camera and zoom as needed on the target. Once ready, I'd hit a button that would start taking pictures every x seconds. Each picture would be compared to the previous one to see if there was a change - the change being that I hit the target. If a change was detected, the two images would be saved, the device would stop taking pictures, the image with the change would be displayed on the device, and the spot of change would be highlighted. When I was ready for the next shot, I would hit a button on the device and the process would start over. If I was done shooting, there would be a button to stop.

Any help in getting this project off the ground would be greatly appreciated.


This will be a very basic algorithm just to evaluate your use case. It can be improved a lot.

(i) In your case, the first step is to identify whether there is a change or not between 2 frames. It can be identified by using a simple StandardDeviation measurement. Set a threshold for acceptable difference in deviation.

Mat prevFrame, currentFrame;
VideoCapture cap(0);                 // capture device (index assumed)

for(;;)
{
    // Getting a frame from the video capture device.
    cap >> currentFrame;

    if( prevFrame.data )
    {
        // Finding the standard deviations of the current and previous frame.
        Scalar prevMean, currentMean, prevStdDev, currentStdDev;
        meanStdDev(prevFrame, prevMean, prevStdDev);
        meanStdDev(currentFrame, currentMean, currentStdDev);

        // Decision making: a change is assumed when the deviation moves by
        // more than the accepted amount (channel 0 is compared here).
        if(std::abs(currentStdDev[0] - prevStdDev[0]) > ACCEPTED_DEVIATION)
        {
            // Save the images and break out of the loop.
            break;
        }
    }

    // To exit from the loop, if there is a keypress event.
    if(waitKey(30) >= 0)
        break;

    // For swapping the previous and current frame.
    swap(prevFrame, currentFrame);
}

(ii) The first step will only identify the change between frames. In order to locate the position where the change occurred, find the difference between the two saved frames using absdiff. Using this difference image as a mask, find the contours and finally mark the region with a bounding rectangle.

Hope this answers your question.


Isn't this question just an application of absdiff? Run absdiff, threshold the result, and count what is left - that is all it takes.
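
A minimal sketch of that route, assuming the camera is stationary so the two saved frames are already aligned; the file names and the threshold value are placeholders.

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat before = cv::imread("before.png", cv::IMREAD_GRAYSCALE);
    cv::Mat after  = cv::imread("after.png",  cv::IMREAD_GRAYSCALE);

    cv::Mat diff, mask;
    cv::absdiff(before, after, diff);                        // pixel-wise difference
    cv::threshold(diff, mask, 40, 255, cv::THRESH_BINARY);   // 40 is illustrative

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::cout << "changed regions: " << contours.size() << std::endl;

    // Highlight each changed region with a bounding rectangle.
    cv::Mat vis;
    cv::cvtColor(after, vis, cv::COLOR_GRAY2BGR);
    for (const auto& c : contours)
        cv::rectangle(vis, cv::boundingRect(c), cv::Scalar(0, 0, 255), 2);

    cv::imshow("difference", vis);
    cv::waitKey();
    return 0;
}

Counting the contours (or cv::countNonZero on the mask) tells you whether a new hit appeared, and the bounding rectangles give the highlighted spots the asker wanted.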

4. System configuration

opencv OCRTesseract::create v3.05

I have Tesseract 3.05 and OpenCV 3.2 installed and tested. But when I tried the end-to-end recognition demo code, I discovered that Tesseract was not found via OCRTesseract::create, and the documentation says the interface is for v3.02. Is it possible to use it with Tesseract v3.05? How?


How to create OpenCV binary files from source with Tesseract (Windows)

I tried to explain the steps below.

Step 1. Download https://github.com/DanBloomberg/lepto...

Extract it to a directory like "E:/leptonica-1.74.4".

Run CMake:

Where is the source code: E:/leptonica-1.74.4
Where to build the binaries: E:/leptonica-1.74.4/build

Click the Configure button and select a compiler.

See "Configuring done", then click the Generate button and see "Generating done".

Open Visual Studio 2015 >> File >> Open "E:\leptonica-1.74.4\build\ALL_BUILD.vcxproj", select Release, and build ALL_BUILD.

See "Build: 3 succeeded" and make sure E:\leptonica-master\build\src\Release\leptonica-1.74.4.lib and E:\leptonica-1.74.4\build\bin\Release\leptonica-1.74.4.dll have been created.


Step 2. Download https://github.com/tesseract-ocr/tess...

Extract it to a directory like "E:/tesseract-3.05.01".

Create a directory E:\tesseract-3.05.01\Files\leptonica\include.

Copy *.h from E:\leptonica-master\src into E:\tesseract-3.05.01\Files\leptonica\include.
Copy *.h from E:\leptonica-master\build\src into E:\tesseract-3.05.01\Files\leptonica\include.

Run CMake:

Where is the source code: E:/tesseract-3.05.01
Where to build the binaries: E:/tesseract-3.05.01/build

Click the Configure button and select a compiler.

Set Leptonica_DIR to E:/leptonica-1.74.4\build, click the Configure button again, see "Configuring done", click the Generate button and see "Generating done".

Open Visual Studio 2015 >> File >> Open "E:/tesseract-3.05.01\build\ALL_BUILD.vcxproj" and build ALL_BUILD.

Make sure E:\tesseract-3.05.01\build\Release\tesseract305.lib and E:\tesseract-3.05.01\build\bin\Release\tesseract305.dll were generated.


Step 3. Create a directory E:\tesseract-3.05.01\include\tesseract.

Copy all *.h files from

E:\tesseract-3.05.01\api
E:\tesseract-3.05.01\ccmain
E:\tesseract-3.05.01\ccutil
E:\tesseract-3.05.01\ccstruct

to E:\tesseract-3.05.01\include\tesseract.

In the OpenCV CMake configuration set Tesseract_INCLUDE_DIR to E:/tesseract-3.05.01/include,
set tesseract_LIBRARY to E:/tesseract-3.05.01/build/Release/tesseract305.lib,
and set Lept_LIBRARY to E:/leptonica-master/build/src/Release/leptonica-1.74.4.lib.

When you click the Configure button you should see "Tesseract: YES", which means everything is OK.

Make the other settings, generate, and compile.
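
Once the build succeeds, a minimal check like the one below can confirm that the text module actually finds Tesseract at run time; this is only a sketch, assuming opencv_contrib's text module was compiled in and the tessdata language files are installed. The image name is a placeholder.

#include <opencv2/text.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("sample_text.png");
    if (img.empty()) return 1;

    // If create() returns a valid pointer, Tesseract was found by the build.
    cv::Ptr<cv::text::OCRTesseract> ocr = cv::text::OCRTesseract::create();

    std::string text;
    ocr->run(img, text);                 // recognise the whole image
    std::cout << text << std::endl;
    return 0;
}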

Helu's note: OCR has always been one of the classic problems in image processing, and Tesseract is a very classic project in this area; it is worth studying together with EAST.
5. An algorithm question

Pyramid Blending with Single Input and Non-Vertical Boundary

Hi All,

Here is the input image.

Say you do not have the other half of the images. Is it still possible to do with Laplacian pyramid blending?

I tried stuffing the image directly into the algorithm, with weights set as opposite triangles; the result comes out the same as the input. My other guess is splitting the triangles: do a Gaussian and a Laplacian pyramid on each separately, and then merge them.

But the challenge is how to apply a Laplacian pyramid to triangular data. What do we fill in on the missing half? I tried 0, and it made the boundary very bright.

If pyramid blending is not the best approach for this, what other methods do you recommend I look into for blending?

Any help is much appreciated!


Comments

The answer is YES. What you need is to pyrDown the images and line-blend them at each pyramid level.

jsxyhelu (26 hours ago)

Thank you for your comment. I tried doing that (explained in my second paragraph), and the output is the same as the original image. Please note that the boundary where I want to merge is NOT vertical, so I do not understand what you meant by "line blend".

What this question asks for is a multi-band blend, and one with a slanted boundary at that, which is rather odd; I do not know in what setting such a need arises, but as an algorithm problem it is still quite valuable. The first thing to solve is the slanted line blend, which is worth thinking about.
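
Here is a minimal single-level sketch of such a slanted line blend, assuming two equally sized images and a hand-chosen boundary line; the file names, the line endpoints, and the ramp half-width are all placeholders. A multi-band version would build Laplacian pyramids of both images and a Gaussian pyramid of the same weight mask, blend level by level, and collapse the result.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <algorithm>
#include <cmath>

int main()
{
    cv::Mat A = cv::imread("left.png");    // hypothetical inputs of equal size
    cv::Mat B = cv::imread("right.png");

    // Boundary line through p0 and p1 (here simply the image diagonal) and a
    // unit normal to it; both are placeholders to be replaced by the real seam.
    cv::Point2f p0(0.f, 0.f), p1((float)A.cols, (float)A.rows);
    float nx = -(p1.y - p0.y), ny = p1.x - p0.x;
    float len = std::sqrt(nx * nx + ny * ny);
    nx /= len; ny /= len;
    const float halfWidth = 30.f;          // half-width of the blending ramp

    cv::Mat out = A.clone();
    for (int y = 0; y < out.rows; ++y)
        for (int x = 0; x < out.cols; ++x)
        {
            // Signed distance to the line, mapped to a 0..1 weight ramp.
            float d = (x - p0.x) * nx + (y - p0.y) * ny;
            float t = std::min(1.f, std::max(0.f, (d / halfWidth + 1.f) * 0.5f));

            const cv::Vec3b& a = A.at<cv::Vec3b>(y, x);
            const cv::Vec3b& b = B.at<cv::Vec3b>(y, x);
            for (int c = 0; c < 3; ++c)
                out.at<cv::Vec3b>(y, x)[c] =
                    cv::saturate_cast<uchar>(t * a[c] + (1.f - t) * b[c]);
        }

    cv::imshow("slanted line blend", out);
    cv::waitKey();
    return 0;
}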
6. Something new

DroidCam with OpenCV

With my previous laptop (Windows 7) I was connecting to my phone camera via DroidCam and using VideoCapture in OpenCV with Visual Studio, and there was no problem. But now I have a laptop with Windows 10, and when I connect the same way it shows an orange screen all the time. The DroidCam app on my laptop works fine and shows the video, but OpenCV VideoCapture from Visual Studio shows an orange screen.

Thanks in advance

Disable the laptop webcam from Device Manager and then restart; then it works.
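
Before touching Device Manager, it may also be worth probing the capture indices explicitly so that VideoCapture opens the DroidCam device rather than the built-in webcam; this is only a sketch and the number of indices to try is an assumption.

#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <string>

int main()
{
    for (int index = 0; index < 4; ++index)          // probe the first few devices
    {
        cv::VideoCapture cap(index);
        if (!cap.isOpened()) continue;

        cv::Mat frame;
        cap >> frame;
        if (frame.empty()) continue;

        std::cout << "device " << index << ": "
                  << frame.cols << "x" << frame.rows << std::endl;
        cv::imshow("device " + std::to_string(index), frame);
    }
    cv::waitKey();
    return 0;
}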
7. Algorithm research

OpenCV / C++ - Filling holes

Hello there,

For a personal project, I'm trying to detect objects and their shadows. These are the results I have so far:

Original: [image]

Object: [image]

Shadow: [image]

The external contours of the object are quite good, but as you can see, my object is not full, and the same goes for the shadow. I would like to get full, filled contours for the object and its shadow, and I don't know how to do better than this (I just use "dilate" for the moment). Does someone know a way to obtain a better result, please? Regards.

An interesting problem; worth digging into.
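
As a first cut, something along these lines could turn the broken outlines into filled blobs; it is only a sketch and assumes the object/shadow results are already binary masks. The file name and the kernel size are placeholders.

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat mask = cv::imread("object_mask.png", cv::IMREAD_GRAYSCALE);

    // Close small gaps in the outline so the external contour is continuous.
    cv::Mat closed;
    cv::morphologyEx(mask, closed, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15)));

    // Redraw the external contours filled to obtain a solid blob.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(closed, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Mat filled = cv::Mat::zeros(mask.size(), CV_8U);
    cv::drawContours(filled, contours, -1, cv::Scalar(255), cv::FILLED);

    cv::imshow("filled", filled);
    cv::waitKey();
    return 0;
}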




Reposted from: https://www.cnblogs.com/jsxyhelu/p/9752650.html
