
A Summary of CVPR 2013

Published 2025/7/25 by 豆豆 on 生活随笔.

Saliency

Saliency Aggregation: A Data-driven Approach. Long Mai, Yuzhen Niu, Feng Liu. No related material has turned up yet; it presumably performs saliency detection by adaptively fusing multiple cues.

PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures with Spatial Priors. Keyang Shi, Keze Wang, Jiangbo Lu, Liang Lin. Neither of the two cues here looks new, so the aggregation framework is presumably the strong part. And since it works at the pixel level, it can probably reach segmentation- or matting-quality results.

Looking Beyond the Image: Unsupervised Learning for Object Saliency and Detection. Parthipan Siva, Chris Russell, Tao Xiang, Lourdes Agapito. Learning-based saliency detection.

Learning video saliency from human gaze using candidate selection. Dmitry Rudoy, Dan Goldman, Eli Shechtman, Lihi Zelnik-Manor. This one tackles video saliency, presumably by selecting salient video objects.

Hierarchical Saliency Detection. Qiong Yan, Li Xu, Jianping Shi, Jiaya Jia. Jiaya Jia's students have started working on saliency too; a multi-scale approach.

Saliency Detection via Graph-Based Manifold Ranking. Chuan Yang, Lihe Zhang, Huchuan Lu, Ming-Hsuan Yang, Xiang Ruan. This presumably extends the classic graph-based saliency work, most likely using saliency-propagation techniques.
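For readers unfamiliar with manifold ranking, here is a minimal sketch of the generic propagation scheme that such methods build on, not the paper's actual algorithm: query scores are diffused over a graph via f ← αSf + (1-α)y until convergence. The 4-node chain graph, the seed, and α below are invented for illustration.

```python
# Tiny 4-node chain graph 0-1-2-3; node 0 is the query seed.
W = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
n = len(W)
deg = [sum(row) for row in W]
# Row-normalized affinity (a simple stand-in for D^{-1/2} W D^{-1/2}).
S = [[W[i][j] / deg[i] for j in range(n)] for i in range(n)]

alpha = 0.85
y = [1.0, 0.0, 0.0, 0.0]           # indicator vector of the query node
f = y[:]
for _ in range(100):               # power-iteration-style propagation
    f = [alpha * sum(S[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
         for i in range(n)]

# Nodes closer to the seed end up with higher ranking scores.
print([round(v, 3) for v in f])
```

The iteration converges to the closed-form ranking solution f = (1-α)(I - αS)⁻¹y, so scores decay smoothly with graph distance from the seed.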

Salient object detection: a discriminative regional feature integration approach. Huaizu Jiang, Jingdong Wang, Zejian Yuan, Yang Wu, Nanning Zheng. A saliency detection method based on adaptive fusion of multiple features.

Submodular Salient Region Detection. Zhuolin Jiang, Larry Davis. Another paper from a big name's group, and the formulation is quite novel: it uses submodularity. The first author has three CVPR papers this year.
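As background on why submodularity is attractive here, a toy sketch (not the paper's objective): greedily picking regions by largest marginal gain, which for monotone submodular objectives like coverage carries the classic (1 - 1/e) approximation guarantee. The regions and pixel sets below are made up.

```python
# Hypothetical candidate regions mapped to the pixels they cover.
regions = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {5, 6},
    "D": {7},
}

def greedy_cover(regions, k):
    """Pick k regions, each time taking the largest marginal coverage gain."""
    covered, picked = set(), []
    for _ in range(k):
        best = max((r for r in regions if r not in picked),
                   key=lambda r: len(regions[r] - covered))
        picked.append(best)
        covered |= regions[best]
    return picked, covered

picked, covered = greedy_cover(regions, 2)
print(picked, covered)   # picks "A" first (gain 4), then "C" (gain 2)
```

The point of a submodular formulation is exactly this: a simple greedy sweep is both fast and provably near-optimal, which fits region-selection problems well.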

Image Segmentation

Efficient Object Detection and Segmentation for Fine-Grained Recognition. Anelia Angelova, Shenghuo Zhu. The selling point should be "efficient": it is a fast algorithm.

Image Segmentation by Cascaded Region Agglomeration. Zhile Ren, Gregory Shakhnarovich. Judging by the title, this looks like a new region-growing-style algorithm; the use of a multi-layer model is worth watching.

Analyzing Semantic Segmentation Using Human-Machine Hybrid CRFs. Roozbeh Mottaghi, Sanja Fidler, Jian Yao, Raquel Urtasun, Devi Parikh. This method apparently puts human-machine interaction into a conditional random field. Plenty of earlier papers have done something similar, so I am curious what approach this one takes. This group had four papers accepted.

Unsupervised Joint Object Discovery and Segmentation in Internet Images. Michael Rubinstein, Armand Joulin, Ce Liu, Johannes Kopf. Unsupervised object discovery and segmentation from Internet images, presumably exploiting the basic property that objects recur across massive image collections.

Weakly-Supervised Bi-Clustering for Image Semantic Segmentation. Yang Liu, Jing Liu, Zechao Li, Hanqing Lu. A bi-clustering problem; my guess is foreground/background segmentation.

Deep Learning Shape Priors for Object Segmentation. Fei Chen, Huimin Yu, Roland Hu, Xunxun Zeng. Learning shape models via deep learning.

SCALPEL: Segmentation Cascades with Localized Priors and Efficient Learning. David Weiss, Ben Taskar. Ben Taskar is a professor at the University of Pennsylvania; the year before last he also received an official U.S. award.

Top-down Segmentation of Non-rigid Visual Objects using Derivative-based Search on Sparse Manifolds. Jacinto Nascimento, Gustavo Carneiro. Top-down segmentation; does it rely on model learning?

Probabilistic Graphlet Cut: Exploiting Spatial Structure Cue for Weakly Supervised Image Segmentation. Luming Zhang, Mingli Song, Zicheng Liu, Xiao Liu, Jiajun Bu, Chun Chen. New terms keep appearing these days; weakly supervised segmentation, and the results should be decent.

Graph Transduction Learning with Connectivity Constraints with Application to Multiple Foreground Cosegmentation. Tianyang Ma, Longin Jan Latecki. From Temple University; his papers show up almost every year.

Towards Fast and Accurate Segmentation. Camillo Taylor. This should be Professor CJ Taylor at Penn; remarkably, he wrote it alone.

A Principled Deep Random Field Model for Image Segmentation. Pushmeet Kohli, Anton Osokin, Stefanie Jegelka. Another paper from big names.

Video Processing

Video Object Segmentation through Spatially Accurate and Temporally Dense Extraction of Primary Object Regions. Dong Zhang, Omar Javed, Mubarak Shah (Oral). A method for segmenting the primary object from video. Since it is an oral, it should be worth studying carefully.

Fast Rigid Motion Segmentation via Incrementally-Complex Local Models. Fernando Flores-Mangas, Allan Jepson. Fast motion segmentation; I am always interested in real-time work.

Multi-Class Video Co-Segmentation with a Generative Multi-Video Model. Wei-Chen Chiu, Mario Fritz. Could this be joint segmentation of several videos at once?

Discriminative Segment Annotation in Weakly Labeled Video. Kevin Tang, Rahul Sukthankar, Jay Yagnik, Li Fei-Fei (Oral). Video annotation; Fei-Fei Li has worked in this direction for quite a while. Worth checking the new ideas in this oral paper.

Representing Videos using Mid-level Discriminative Patches. Arpit Jain, Abhinav Gupta, Mikel Rodriguez, Larry Davis. A new video representation; it should be applicable to video segmentation.

Video Editing with Temporal, Spatial and Appearance Consistency. Xiaojie Guo, Xiaochun Cao, Yi Ma. Yi Ma's paper on video editing; it presumably also relies mainly on video segmentation techniques.

Ensemble Video Object Cut in Highly Dynamic Scenes. Xiaobo Ren, Tony Han, Zhihai He. In highly dynamic scenes temporal consistency is hard to guarantee, so video segmentation should become difficult.

Hierarchical Video Representation with Trajectory Binary Partition Tree. Guillem Palou, Philippe Salembier. The title looks interesting: a binary partition tree over trajectories.

Adherent Raindrop Detection and Removal in Video. Shaodi You, Rei Kawakami, Robby Tan, Katsushi Ikeuchi. An interesting paper from Japan on detecting and removing raindrops in video.

Tracking

Tracking Sports Players with Context-Conditioned Motion Models. Jingchen Liu, Peter Carr, Robert Collins, Yanxi Liu (Oral). Bob Collins's paper, tracking athletes with motion models.

Multi-target Tracking by Lagrangian Relaxation to Min-Cost Network Flow. Asad Butt, Robert Collins (Oral). It seems Professor Collins is dominating the tracking field, with two orals at once.
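I have not read the paper, so here is only a toy stand-in for the single-frame special case of network-flow tracking: associating existing tracks to new detections as a min-cost assignment (the cost matrix below is invented). The full min-cost-flow formulation extends this across many frames with entry/exit arcs.

```python
from itertools import permutations

# cost[i][j]: hypothetical cost of linking track i to detection j
# in the next frame (e.g. appearance + motion disagreement).
cost = [
    [1.0, 4.0, 5.0],
    [3.0, 0.5, 6.0],
    [4.0, 5.0, 1.5],
]

# Brute-force min-cost assignment; real systems solve this with the
# Hungarian algorithm or a min-cost network-flow solver instead.
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
print(best)  # optimal links: 0->0, 1->1, 2->2
```

Brute force is exponential, which is exactly why the flow/relaxation machinery matters once there are many targets and frames.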

Physically Plausible 3D Scene Tracking: The Single Actor Hypothesis. Nikolaos Kyriazis, Antonis Argyros (Oral). On 3D scene tracking; an oral.

Structure Preserving Object Tracking. Lu Zhang, Laurens van der Maaten (Oral). Structure-preserving tracking; I am not sure which kind of structure is meant. Skeletons?

Harry Potter's Marauder's Map: Localizing and Tracking Multiple Persons-of-Interest by Nonnegative Discretization. Shoou-I Yu, Yi Yang, Alexander Hauptmann. Even Harry Potter gets pulled in; worth a look.

Detection- and Trajectory-Level Exclusion in Multiple Object Tracking. Anton Andriyenko, Stefan Roth, Konrad Schindler. This one presumably focuses on using trajectories for associating targets.

Robust Real-Time Tracking of Multiple Objects by Volumetric Mass Densities. Horst Possegger, Sabine Sternig, Thomas Mauthner, Peter Roth, Horst Bischof. I am not sure what volumetric mass density means; perhaps something like the integrated probability density within the volume formed by the tracked targets.

Learning Compact Binary Codes for Visual Tracking. Xi Li, Chunhua Shen, Anthony Dick, Anton van den Hengel. The title looks interesting.
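Guessing at the machinery rather than reporting the paper's method: the appeal of binary codes is that comparing a candidate patch against the target template reduces to XOR plus a popcount. A minimal sketch with made-up 8-bit codes:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary codes: XOR then popcount."""
    return bin(a ^ b).count("1")

template = 0b10110100                  # hypothetical code of the target
candidates = {                         # hypothetical codes of search patches
    "patch_0": 0b10110101,
    "patch_1": 0b01001011,
    "patch_2": 0b10110100,
}

# The best match is the candidate with the smallest Hamming distance.
best = min(candidates, key=lambda k: hamming(candidates[k], template))
print(best)
```

With real descriptors the codes would be learned (e.g. by hashing high-dimensional features), but the matching step stays this cheap, which is what makes it attractive for tracking.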

Part-based Visual Tracking with Online Latent Structural Learning. Rui Yao, Qinfeng Shi, Chunhua Shen, Yanning Zhang, Anton van den Hengel. This paper is probably from Northwestern Polytechnical University; part-based online tracking.

Self-paced learning for long-term tracking. James Supancic III, Deva Ramanan. This one is also interesting; it presumably analyzes how often the appearance model should be updated in long-term tracking.

Joint Multi-Camera Reconstruction and Multi-Object Tracking in a Global Unified Optimization Framework. Martin Hofmann, Daniel Wolf. Multi-object tracking and scene reconstruction with multiple cameras.

Least Soft-threshold Squares Tracking. Dong Wang, Huchuan Lu, Ming-Hsuan Yang.

Tracking People and Their Objects. Tobias Baumgartner, Dennis Mitzel, Bastian Leibe. Is it about tracking people together with the objects they carry?

Tracking Human Pose by Tracking Symmetric Parts. Varun Ramakrishna, Yaser Sheikh, Takeo Kanade. Professor Kanade's paper: tracking people by exploiting symmetry.

Stereo Vision

Accurate Localization of 3D Objects from RGB-D Data using Segmentation Hypotheses. Byung-soo Kim, Shili Xu, Silvio Savarese. With the spread of Kinect, RGB-D data is drawing more and more attention.

Megastereo: Constructing High-Resolution Stereo Panoramas. Christian Richardt, Yael Pritch, Henning Zimmer, Alexander Sorkine-Hornung (Oral). Building high-resolution stereo panoramas; this should have commercial potential.

Scene-SIRFS: Intrinsic Scene Properties from a Single RGB-D Image. Jonathan Barron, Jitendra Malik (Oral).

Perceptual Organization and Recognition of Indoor Scenes from RGBD Images. Saurabh Gupta, Pablo Arbelaez, Jitendra Malik (Oral). Two consecutive orals from Professor J. Malik, both on RGB-D images; his group is apparently very interested in this area now.

A New Perspective on Uncalibrated Photometric Stereo. Thoma Papadhimitri, Paolo Favaro. Calibration-free, so it should suit handheld devices.

In Defense of 3D-Label Stereo. Carl Olsson, Johannes Ulen, Yuri Boykov. A paper from big names; worth following.

Recovering Stereo Pairs from Anaglyphs. Armand Joulin, Sing Bing Kang.

Segment-Tree based Cost Aggregation for Stereo Matching. Xing Mei, Xun Sun, Weiming Dong, Xiaopeng Zhang. Stereo matching based on a segment tree.

Miscellaneous

Integrating Grammar and Segmentation for Human Pose Estimation. Brandon Rothrock, Seyoung Park, Song Chun Zhu. On pose estimation; I have not worked in this area myself, but I would like to learn about it.

Watching Unlabeled Video Helps Learn New Human Actions from Very Few Labeled Snapshots. Chao-Yeh Chen, Kristen Grauman (Oral). The title is very interesting; Professor Grauman's oral, worth a look.

Context-Aware Modeling and Recognition of Activities in Video. Amit Roy-Chowdhury, Yingying Zhu (Oral). Related to activity recognition, using contextual information.

Recognize Human Activities from Partially Observed Videos. Yu Cao, Daniel Barrett, Andrei Barbu, Siddharth Narayanaswamy, Haonan Yu, Aaron Michaux, Yuewei Lin, Sven Dickinson, Jeffrey Siskind, Song Wang. I noticed this paper mainly because it is the first time I have seen a CVPR paper with this many authors (ten!).

Large Displacement Optical Flow from Nearest Neighbor Fields. Zhuoyuan Chen, Hailin Jin, Zhe Lin, Scott Cohen, Ying Wu. Ying Wu proposes a new large-displacement optical flow method; I wonder whether it will be faster than Brox's.

Better exploiting motion for better action recognition. Mihir Jain, Herve Jegou, Patrick Bouthemy. An attractive title; worth a look.

Motionlets: Mid-Level 3D Parts for Human Motion Recognition. Limin Wang, Yu Qiao, Xiaoou Tang. Mid-level 3D parts.

Motion Estimation for Self-Driving Cars With a Generalized Camera. Gim Hee Lee, Friedrich Fraundorfer, Marc Pollefeys. Visual motion estimation for self-driving cars; I am very interested in this.

Deformable Spatial Pyramid Matching for Fast Dense Correspondences. Jaechul Kim, Ce Liu, Fei Sha, Kristen Grauman. On dense correspondences; a collaboration between Ce Liu and Grauman.

Pose from Flow and Flow from Pose. Katerina Fragkiadaki, Han Hu, Jianbo Shi. From a student of Professor Jianbo Shi, with whom I have collaborated before.

Correlation Filters for Improved Object Alignment. Vishnu Naresh Boddeti, Takeo Kanade, Vijayakumar Bhagavatula. Professor Kanade's paper, on object alignment.

Articulated Pose Estimation using Discriminative Armlet Classifiers. Georgia Gkioxari, Pablo Arbelaez, Lubomir Bourdev, Jitendra Malik.

