Today's arXiv Picks | 35 Top-Conference Papers: ICCV / CIKM / ACM MM

發(fā)布時(shí)間:2024/10/8 编程问答 34 豆豆
生活随笔 收集整理的這篇文章主要介紹了 今日arXiv精选 | 35篇顶会论文:ICCV/ CIKM/ ACM MM 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

?關(guān)于?#今日arXiv精選?

This is a column under 「AI 学术前沿」 (AI Academic Frontier): each day the editors select high-quality papers from arXiv and deliver them to readers.

TSI: an Ad Text Strength Indicator using Text-to-CTR and Semantic-Ad-Similarity

Comment: Accepted for publication at CIKM 2021

Link: http://arxiv.org/abs/2108.08226

Abstract

Coming up with effective ad text is a time-consuming process, and particularly challenging for small businesses with limited advertising experience. When an inexperienced advertiser onboards with a poorly written ad text, the ad platform has the opportunity to detect low performing ad text, and provide improvement suggestions. To realize this opportunity, we propose an ad text strength indicator (TSI) which: (i) predicts the click-through-rate (CTR) for an input ad text, (ii) fetches similar existing ads to create a neighborhood around the input ad, and (iii) compares the predicted CTRs in the neighborhood to declare whether the input ad is strong or weak. In addition, as suggestions for ad text improvement, TSI shows anonymized versions of superior ads (higher predicted CTR) in the neighborhood. For (i), we propose a BERT based text-to-CTR model trained on impressions and clicks associated with an ad text. For (ii), we propose a sentence-BERT based semantic-ad-similarity model trained using weak labels from ad campaign setup data. Offline experiments demonstrate that our BERT based text-to-CTR model achieves a significant lift in CTR prediction AUC for cold start (new) advertisers compared to bag-of-words based baselines. In addition, our semantic-textual-similarity model for similar ads retrieval achieves a precision@1 of 0.93 (for retrieving ads from the same product category); this is significantly higher compared to unsupervised TF-IDF, word2vec, and sentence-BERT baselines. Finally, we share promising online results from advertisers in the Yahoo (Verizon Media) ad platform where a variant of TSI was implemented with sub-second end-to-end latency.
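Step (iii), the neighborhood comparison, reduces to a simple percentile test over the retrieved ads' predicted CTRs. The sketch below is an illustration only: the percentile cutoff and the CTR values are hypothetical, not taken from the paper.

```python
import numpy as np

def ad_strength(input_ctr, neighbor_ctrs, percentile=75):
    """Declare an input ad 'strong' or 'weak' by comparing its predicted CTR
    against the predicted CTRs of semantically similar ads (step iii of TSI).
    The percentile cutoff is an assumed detail, not the paper's choice."""
    cutoff = np.percentile(neighbor_ctrs, percentile)
    return "strong" if input_ctr >= cutoff else "weak"

# Hypothetical predicted CTRs for an input ad and its retrieved neighborhood.
neighbors = [0.010, 0.018, 0.022, 0.025, 0.040]
print(ad_strength(0.031, neighbors))  # beats the 75th-percentile neighbor -> "strong"
```

In the full system the weak verdict would additionally trigger showing anonymized higher-CTR neighbors as improvement suggestions.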

Learning Implicit User Profiles for Personalized Retrieval-Based Chatbot

Comment: Accepted by CIKM 2021

Code: https://github.com/qhjqhj00/CIKM2021-IMPChat

Link: http://arxiv.org/abs/2108.07935

Abstract

In this paper, we explore the problem of developing personalized chatbots. A personalized chatbot is designed as a digital chatting assistant for a user. The key characteristic of a personalized chatbot is that it should have a consistent personality with the corresponding user. It can talk the same way as the user when it is delegated to respond to others' messages. We present a retrieval-based personalized chatbot model, namely IMPChat, to learn an implicit user profile from the user's dialogue history. We argue that the implicit user profile is superior to the explicit user profile regarding accessibility and flexibility. IMPChat aims to learn an implicit user profile through modeling the user's personalized language style and personalized preferences separately. To learn a user's personalized language style, we elaborately build language models from shallow to deep using the user's historical responses; to model a user's personalized preferences, we explore the conditional relations underneath each post-response pair of the user. The personalized preferences are dynamic and context-aware: we assign higher weights to those historical pairs that are topically related to the current query when aggregating the personalized preferences. We match each response candidate with the personalized language style and personalized preference, respectively, and fuse the two matching signals to determine the final ranking score. Comprehensive experiments on two large datasets show that our method outperforms all baseline models.

Pixel-Perfect Structure-from-Motion with Featuremetric Refinement

Comment: Accepted to ICCV 2021 for oral presentation

Link: http://arxiv.org/abs/2108.08291

Abstract

Finding local features that are repeatable across multiple views is a cornerstone of sparse 3D reconstruction. The classical image matching paradigm detects keypoints per-image once and for all, which can yield poorly-localized features and propagate large errors to the final geometry. In this paper, we refine two key steps of structure-from-motion by a direct alignment of low-level image information from multiple views: we first adjust the initial keypoint locations prior to any geometric estimation, and subsequently refine points and camera poses as a post-processing. This refinement is robust to large detection noise and appearance changes, as it optimizes a featuremetric error based on dense features predicted by a neural network. This significantly improves the accuracy of camera poses and scene geometry for a wide range of keypoint detectors, challenging viewing conditions, and off-the-shelf deep features. Our system easily scales to large image collections, enabling pixel-perfect crowd-sourced localization at scale. Our code is publicly available at https://github.com/cvg/pixel-perfect-sfm as an add-on to the popular SfM software COLMAP.
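The heart of featuremetric refinement, nudging a keypoint so its dense feature agrees with the corresponding feature in another view, can be illustrated with a toy 1-D gradient descent. Everything here (the feature map, linear interpolation, step size) is a simplified assumption, not the paper's solver.

```python
import numpy as np

def refine_keypoint(feat_map, x0, target, steps=50, lr=0.5):
    """Toy 1-D featuremetric refinement (illustrative, not the paper's
    Ceres-style solver): slide a sub-pixel keypoint position x to minimize
    the squared distance between the locally interpolated dense feature
    and the feature of the same point observed in another view."""
    x = float(x0)
    for _ in range(steps):
        i = int(np.clip(x, 0, len(feat_map) - 2))
        f = feat_map[i] + (x - i) * (feat_map[i + 1] - feat_map[i])  # linear interp
        g = feat_map[i + 1] - feat_map[i]                            # d f / d x
        x -= lr * 2 * (f - target) * g                               # gradient step on (f - target)^2
        x = float(np.clip(x, 0, len(feat_map) - 1))
    return x

# A hypothetical 1-D dense feature channel; the refined keypoint should land
# where the interpolated feature equals the other view's feature value 1.5.
refined = refine_keypoint(np.array([0.0, 1.0, 2.0, 3.0]), x0=0.2, target=1.5)
```

In the real system the same idea runs over 2-D CNN feature maps and is coupled with bundle adjustment over points and poses.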

Deep Reparametrization of Multi-Frame Super-Resolution and Denoising

Comment: ICCV 2021 Oral

Link: http://arxiv.org/abs/2108.08286

Abstract

We propose a deep reparametrization of the maximum a posteriori formulation commonly employed in multi-frame image restoration tasks. Our approach is derived by introducing a learned error metric and a latent representation of the target image, which transforms the MAP objective to a deep feature space. The deep reparametrization allows us to directly model the image formation process in the latent space, and to integrate learned image priors into the prediction. Our approach thereby leverages the advantages of deep learning, while also benefiting from the principled multi-frame fusion provided by the classical MAP formulation. We validate our approach through comprehensive experiments on burst denoising and burst super-resolution datasets. Our approach sets a new state-of-the-art for both tasks, demonstrating the generality and effectiveness of the proposed formulation.
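In symbols (notation illustrative, not the paper's), the reparametrization replaces the classical pixel-space objective with a feature-space one:

```latex
% Classical MAP over burst frames $x_i$ with degradation operators $A_i$ and prior $R$:
\hat{y} = \arg\min_{y} \sum_i \bigl\| x_i - A_i\, y \bigr\|^2 + \lambda R(y)
% Deep reparametrization: a learned encoder $E$ acts as the error metric and the
% target is parametrized through a latent $z$ with decoder $D$:
\hat{z} = \arg\min_{z} \sum_i \bigl\| E(x_i) - E\!\left(A_i\, D(z)\right) \bigr\|^2,
\qquad \hat{y} = D(\hat{z})
```

The explicit prior term is absorbed into the learned decoder and feature space, which is what lets learned image priors enter the otherwise classical multi-frame fusion.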

Stochastic Scene-Aware Motion Prediction

Comment: ICCV 2021

Link: http://arxiv.org/abs/2108.08284

Abstract

A long-standing goal in computer vision is to capture, model, and realistically synthesize human behavior. Specifically, by learning from data, our goal is to enable virtual humans to navigate within cluttered indoor scenes and naturally interact with objects. Such embodied behavior has applications in virtual reality, computer games, and robotics, while synthesized behavior can be used as a source of training data. This is challenging because real human motion is diverse and adapts to the scene. For example, a person can sit or lie on a sofa in many places and with varying styles. It is necessary to model this diversity when synthesizing virtual humans that realistically perform human-scene interactions. We present a novel data-driven, stochastic motion synthesis method that models different styles of performing a given action with a target object. Our method, called SAMP, for Scene-Aware Motion Prediction, generalizes to target objects of various geometries while enabling the character to navigate in cluttered scenes. To train our method, we collected MoCap data covering various sitting, lying down, walking, and running styles. We demonstrate our method on complex indoor scenes and achieve superior performance compared to existing solutions. Our code and data are available for research at https://samp.is.tue.mpg.de.

End-to-End Urban Driving by Imitating a Reinforcement Learning Coach

Comment: ICCV 2021

Link: http://arxiv.org/abs/2108.08265

Abstract

End-to-end approaches to autonomous driving commonly rely on expert demonstrations. Although humans are good drivers, they are not good coaches for end-to-end algorithms that demand dense on-policy supervision. On the contrary, automated experts that leverage privileged information can efficiently generate large scale on-policy and off-policy demonstrations. However, existing automated experts for urban driving make heavy use of hand-crafted rules and perform suboptimally even on driving simulators, where ground-truth information is available. To address these issues, we train a reinforcement learning expert that maps bird's-eye view images to continuous low-level actions. While setting a new performance upper-bound on CARLA, our expert is also a better coach that provides informative supervision signals for imitation learning agents to learn from. Supervised by our reinforcement learning coach, a baseline end-to-end agent with monocular camera-input achieves expert-level performance. Our end-to-end agent achieves a 78% success rate while generalizing to a new town and new weather on the NoCrash-dense benchmark and state-of-the-art performance on the more challenging CARLA LeaderBoard.

Towards Robust Human Trajectory Prediction in Raw Videos

Comment: 8 pages, 6 figures. Accepted by the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)

Link: http://arxiv.org/abs/2108.08259

Abstract

Human trajectory prediction has received increased attention lately due to its importance in applications such as autonomous vehicles and indoor robots. However, most existing methods make predictions based on human-labeled trajectories and ignore the errors and noises in detection and tracking. In this paper, we study the problem of human trajectory forecasting in raw videos, and show that the prediction accuracy can be severely affected by various types of tracking errors. Accordingly, we propose a simple yet effective strategy to correct the tracking failures by enforcing prediction consistency over time. The proposed "re-tracking" algorithm can be applied to any existing tracking and prediction pipelines. Experiments on public benchmark datasets demonstrate that the proposed method can improve both tracking and prediction performance in challenging real-world scenarios. The code and data are available at https://git.io/retracking-prediction.
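The prediction-consistency idea can be sketched in a few lines: when an observed position disagrees too strongly with the motion model's prediction, treat the observation as a tracking failure. This is a toy stand-in under assumed details (a fixed distance threshold, falling back to the prediction), not the paper's re-tracking algorithm.

```python
import numpy as np

def retrack(observed, predicted, threshold=1.0):
    """Toy consistency check between observed track positions and the
    predictor's forecasts (an assumption-laden sketch of 're-tracking'):
    observations that deviate from the prediction by more than `threshold`
    are flagged as tracking failures and replaced by the prediction."""
    corrected = []
    for obs, pred in zip(observed, predicted):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        if np.linalg.norm(obs - pred) > threshold:
            corrected.append(pred)   # likely ID switch or lost track
        else:
            corrected.append(obs)    # observation consistent with prediction
    return corrected

# Hypothetical observed track (third point is an ID-switch outlier) and the
# predictor's positions for the same frames.
corrected = retrack([(0.0, 0.0), (1.0, 0.0), (9.0, 9.0)],
                    [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
```

The outlier at frame 3 is replaced by the consistent prediction, which is what lets both tracking and downstream forecasting improve together.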

LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector

Comment: ICCV'21

Link: http://arxiv.org/abs/2108.08258

Abstract

Stereo-based 3D detection aims at detecting 3D object bounding boxes from stereo images using intermediate depth maps or implicit 3D geometry representations, which provides a low-cost solution for 3D perception. However, its performance is still inferior compared with LiDAR-based detection algorithms. To detect and localize accurate 3D bounding boxes, LiDAR-based models can encode accurate object boundaries and surface normal directions from LiDAR point clouds. However, the detection results of stereo-based detectors are easily affected by the erroneous depth features due to the limitation of stereo matching. To solve the problem, we propose LIGA-Stereo (LiDAR Geometry Aware Stereo Detector) to learn stereo-based 3D detectors under the guidance of high-level geometry-aware representations of LiDAR-based detection models. In addition, we found that existing voxel-based stereo detectors failed to learn semantic features effectively from indirect 3D supervisions. We attach an auxiliary 2D detection head to provide direct 2D semantic supervisions. Experiment results show that the above two strategies improved the geometric and semantic representation capabilities. Compared with the state-of-the-art stereo detector, our method improves the 3D detection performance of cars, pedestrians, and cyclists by 10.44%, 5.69%, and 5.97% mAP respectively on the official KITTI benchmark. The gap between stereo-based and LiDAR-based 3D detectors is further narrowed.

LOKI: Long Term and Key Intentions for Trajectory Prediction

Comment: ICCV 2021 (The dataset is available at https://usa.honda-ri.com/loki)

Link: http://arxiv.org/abs/2108.08236

Abstract

Recent advances in trajectory prediction have shown that explicit reasoning about agents' intent is important to accurately forecast their motion. However, the current research activities are not directly applicable to intelligent and safety critical systems. This is mainly because very few public datasets are available, and they only consider pedestrian-specific intents for a short temporal horizon from a restricted egocentric view. To this end, we propose LOKI (LOng term and Key Intentions), a novel large-scale dataset that is designed to tackle joint trajectory and intention prediction for heterogeneous traffic agents (pedestrians and vehicles) in an autonomous driving setting. The LOKI dataset is created to discover several factors that may affect intention, including i) agent's own will, ii) social interactions, iii) environmental constraints, and iv) contextual information. We also propose a model that jointly performs trajectory and intention prediction, showing that recurrently reasoning about intention can assist with trajectory prediction. We show our method outperforms state-of-the-art trajectory prediction methods by up to $27\%$ and also provide a baseline for frame-wise intention estimation.

MBRS : Enhancing Robustness of DNN-based Watermarking by Mini-Batch of Real and Simulated JPEG Compression

Comment: 9 pages, 6 figures, accepted by ACM MM'21

Link: http://arxiv.org/abs/2108.08211

Abstract

Based on the powerful feature extraction ability of deep learning architectures, deep-learning based watermarking algorithms have recently been widely studied. The basic framework of such algorithms is an auto-encoder-like end-to-end architecture with an encoder, a noise layer and a decoder. The key to guaranteeing robustness is adversarial training with a differentiable noise layer. However, we found that none of the existing frameworks can well ensure robustness against JPEG compression, which is non-differentiable but is an essential and important image processing operation. To address such limitations, we propose a novel end-to-end training architecture, which utilizes a Mini-Batch of Real and Simulated JPEG compression (MBRS) to enhance JPEG robustness. Precisely, for different mini-batches, we randomly choose one of real JPEG, simulated JPEG and a noise-free layer as the noise layer. Besides, we suggest utilizing Squeeze-and-Excitation blocks, which can learn better features in the embedding and extracting stages, and propose a "message processor" to expand the message in a more appropriate way. Meanwhile, to improve robustness against crop attacks, we add an additive diffusion block to the network. Extensive experimental results demonstrate the superior performance of the proposed scheme compared with state-of-the-art algorithms. Under JPEG compression with quality factor Q=50, our models achieve a bit error rate less than 0.01% for extracted messages, with PSNR larger than 36 for the encoded images, which shows well-enhanced robustness against the JPEG attack. Besides, under many other distortions such as Gaussian filter, crop, cropout and dropout, the proposed framework also obtains strong robustness. The code, implemented in PyTorch, is available at https://github.com/jzyustc/MBRS.
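The mini-batch trick itself is easy to state in code. The sketch below is a skeleton under loud assumptions: the three noise layers are identity placeholders standing in for the real (non-differentiable) JPEG round-trip, a differentiable JPEG approximation, and the noise-free path; the trained encoder/decoder networks are omitted entirely.

```python
import random

# Placeholder noise layers (assumptions for illustration only).
def real_jpeg(img):       # a true, non-differentiable JPEG encode/decode would go here
    return img

def simulated_jpeg(img):  # a differentiable JPEG approximation would go here
    return img

def identity(img):        # the noise-free layer
    return img

def pick_noise_layer(rng=random):
    """MBRS's central trick, sketched: for each mini-batch, randomly choose
    one of {real JPEG, simulated JPEG, noise-free} as the noise layer, so the
    decoder regularly sees genuine JPEG artifacts while the simulated and
    noise-free batches keep the pipeline trainable end-to-end."""
    return rng.choice([real_jpeg, simulated_jpeg, identity])
```

A training loop would call `pick_noise_layer()` once per mini-batch and insert the chosen function between encoder and decoder.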

Overfitting the Data: Compact Neural Video Delivery via Content-aware Feature Modulation

Comment: Accepted by ICCV 2021

Link: http://arxiv.org/abs/2108.08202

Abstract

Internet video delivery has undergone a tremendous explosion of growth over the past few years. However, the quality of a video delivery system greatly depends on the Internet bandwidth. Deep Neural Networks (DNNs) have recently been utilized to improve the quality of video delivery. These methods divide a video into chunks, and stream LR video chunks and corresponding content-aware models to the client. The client runs the inference of the models to super-resolve the LR chunks. Consequently, a large number of models are streamed in order to deliver a video. In this paper, we first carefully study the relation between models of different chunks, then we tactfully design a joint training framework along with the Content-aware Feature Modulation (CaFM) layer to compress these models for neural video delivery. With our method, each video chunk requires less than 1% of the original parameters to be streamed, while achieving even better SR performance. We conduct extensive experiments across various SR backbones, video time lengths, and scaling factors to demonstrate the advantages of our method. Besides, our method can also be viewed as a new approach to video coding. Our primary experiments achieve better video quality compared with the commercial H.264 and H.265 standards under the same storage cost, showing the great potential of the proposed method. Code is available at: https://github.com/Neural-video-delivery/CaFM-Pytorch-ICCV2021
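The "less than 1% of parameters" arithmetic rests on the modulation idea: one shared backbone plus a tiny per-chunk parameter set. The stand-in below simplifies the real CaFM layer (which uses depth-wise convolutions) down to a per-channel scale and bias; the function name and parameter shapes are assumptions for illustration.

```python
import numpy as np

def cafm(shared_features, chunk_scale, chunk_bias):
    """A minimal stand-in for Content-aware Feature Modulation: a shared SR
    model produces `shared_features` (C x H x W), and each video chunk stores
    only a small per-channel scale and bias that specializes those features
    to that chunk. The real layer's exact form is simplified away here."""
    return shared_features * chunk_scale[:, None, None] + chunk_bias[:, None, None]

# Per chunk we would stream just 2*C numbers instead of a full model.
features = np.ones((4, 2, 2))                    # C=4 feature maps from the shared model
scale = np.array([1.0, 2.0, 3.0, 4.0])           # hypothetical per-chunk scales
bias = np.zeros(4)                               # hypothetical per-chunk biases
out = cafm(features, scale, bias)
```

Streaming only `(scale, bias)` per chunk is what makes the per-chunk overhead a small fraction of the backbone's parameter count.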

Masked Face Recognition Challenge: The InsightFace Track Report

Comment: The WebFace260M Track of the ICCV-21 MFR Challenge is still open at https://github.com/deepinsight/insightface/tree/master/challenges/iccv21-mfr

Link: http://arxiv.org/abs/2108.08191

Abstract

During the COVID-19 coronavirus epidemic, almost everyone wears a facial mask, which poses a huge challenge to deep face recognition. In this workshop, we organize the Masked Face Recognition (MFR) challenge and focus on benchmarking deep face recognition methods under the existence of facial masks. In the MFR challenge, there are two main tracks: the InsightFace track and the WebFace260M track. For the InsightFace track, we manually collect a large-scale masked face test set with 7K identities. In addition, we also collect a children test set including 14K identities and a multi-racial test set containing 242K identities. By using these three test sets, we build up an online model testing system, which can give a comprehensive evaluation of face recognition models. To avoid data privacy problems, no test image is released to the public. As the challenge is still ongoing, we will keep updating the top-ranked solutions as well as this report on arXiv.

ME-PCN: Point Completion Conditioned on Mask Emptiness

Comment: to appear in ICCV 2021

Link: http://arxiv.org/abs/2108.08187

Abstract

Point completion refers to completing the missing geometries of an object from incomplete observations. Main-stream methods predict the missing shapes by decoding a global feature learned from the input point cloud, which often leads to deficient results in preserving topology consistency and surface details. In this work, we present ME-PCN, a point completion network that leverages `emptiness' in 3D shape space. Given a single depth scan, previous methods often encode the occupied partial shapes while ignoring the empty regions (e.g. holes) in depth maps. In contrast, we argue that these `emptiness' clues indicate shape boundaries that can be used to improve topology representation and detail granularity on surfaces. Specifically, our ME-PCN encodes both the occupied point cloud and the neighboring `empty points'. It estimates coarse-grained but complete and reasonable surface points in the first stage, followed by a refinement stage to produce fine-grained surface details. Comprehensive experiments verify that our ME-PCN presents better qualitative and quantitative performance against the state-of-the-art. Besides, we further prove that our `emptiness' design is lightweight and easy to embed in existing methods, which shows consistent effectiveness in improving the CD and EMD scores.

Effect of Parameter Optimization on Classical and Learning-based Image Matching Methods

Comment: 8 pages, 2 figures, 3 tables, ICCV 2021 TradiCV Workshop

Link: http://arxiv.org/abs/2108.08179

Abstract

Deep learning-based image matching methods have improved significantly in recent years. Although these methods are reported to outperform the classical techniques, the performance of the classical methods is not examined in detail. In this study, we compare classical and learning-based methods by employing mutual nearest neighbor search with a ratio test and optimizing the ratio test threshold to achieve the best performance on two different performance metrics. After a fair comparison, the experimental results on the HPatches dataset reveal that the performance gap between classical and learning-based methods is not that significant. Throughout the experiments, we demonstrated that SuperGlue is the state-of-the-art technique for the image matching problem on the HPatches dataset. However, if a single parameter, namely the ratio test threshold, is carefully optimized, the well-known traditional method SIFT performs quite close to SuperGlue and even outperforms it in terms of mean matching accuracy (MMA) under 1 and 2 pixel thresholds. Moreover, a recent approach, DFM, which only uses pre-trained VGG features as descriptors and a ratio test, is shown to outperform most of the well-trained learning-based methods. Therefore, we conclude that the parameters of any classical method should be analyzed carefully before comparing it against a learning-based technique.
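The procedure being tuned, mutual nearest-neighbour search plus Lowe's ratio test, is compact enough to write out. This NumPy sketch brute-forces all pairwise distances and is illustrative rather than an optimized implementation; the descriptor values in the demo are made up.

```python
import numpy as np

def match(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbour matching with Lowe's ratio test. `ratio` is
    the single parameter whose careful optimization the study finds closes
    much of the gap between SIFT and learned matchers."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)                 # best match in B for each A
    nn_ba = d.argmin(axis=0)                 # best match in A for each B
    matches = []
    for i, j in enumerate(nn_ab):
        if nn_ba[j] != i:                    # keep only mutual nearest neighbours
            continue
        second = np.partition(d[i], 1)[1]    # distance to the 2nd-best match
        if d[i, j] < ratio * second:         # Lowe's ratio test
            matches.append((i, int(j)))
    return matches

# Two toy descriptor sets; each row is a descriptor (hypothetical values).
desc_a = np.array([[0.0, 0.0], [10.0, 0.0]])
desc_b = np.array([[0.0, 0.1], [10.0, 0.2], [50.0, 50.0]])
print(match(desc_a, desc_b))  # -> [(0, 0), (1, 1)]
```

Sweeping `ratio` over a validation set and keeping the best value is, per the study, the kind of per-method tuning that fair comparisons require.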

Deployment of Deep Neural Networks for Object Detection on Edge AI Devices with Runtime Optimization

Comment: To be presented at ICCV 2021 (ERCVAD Workshop)

Link: http://arxiv.org/abs/2108.08166

Abstract

Deep neural networks have proven increasingly important for automotive scene understanding with new algorithms offering constant improvements of the detection performance. However, there is little emphasis on experiences and needs for deployment in embedded environments. We therefore perform a case study of the deployment of two representative object detection networks on an edge AI platform. In particular, we consider RetinaNet for image-based 2D object detection and PointPillars for LiDAR-based 3D object detection. We describe the modifications necessary to convert the algorithms from a PyTorch training environment to the deployment environment taking into account the available tools. We evaluate the runtime of the deployed DNN using two different libraries, TensorRT and TorchScript. In our experiments, we observe slight advantages of TensorRT for convolutional layers and TorchScript for fully connected layers. We also study the trade-off between runtime and performance, when selecting an optimized setup for deployment, and observe that quantization significantly reduces the runtime while having only little impact on the detection performance.

Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting

Comment: ICCV 2021

Link: http://arxiv.org/abs/2108.08165

Abstract

Both generalized and incremental few-shot learning have to deal with three major challenges: learning novel classes from only few samples per class, preventing catastrophic forgetting of base classes, and classifier calibration across novel and base classes. In this work we propose a three-stage framework that allows us to explicitly and effectively address these challenges. While the first phase learns base classes with many samples, the second phase learns a calibrated classifier for novel classes from few samples while also preventing catastrophic forgetting. In the final phase, calibration is achieved across all classes. We evaluate the proposed framework on four challenging benchmark datasets for image and video few-shot classification and obtain state-of-the-art results for both generalized and incremental few-shot learning.

Specificity-preserving RGB-D Saliency Detection

Comment: Accepted by ICCV 2021

Link: http://arxiv.org/abs/2108.08162

Abstract

RGB-D saliency detection has attracted increasing attention, due to its effectiveness and the fact that depth cues can now be conveniently captured. Existing works often focus on learning a shared representation through various fusion strategies, with few methods explicitly considering how to preserve modality-specific characteristics. In this paper, taking a new perspective, we propose a specificity-preserving network (SP-Net) for RGB-D saliency detection, which benefits saliency detection performance by exploring both the shared information and modality-specific properties (e.g., specificity). Specifically, two modality-specific networks and a shared learning network are adopted to generate individual and shared saliency maps. A cross-enhanced integration module (CIM) is proposed to fuse cross-modal features in the shared learning network, which are then propagated to the next layer for integrating cross-level information. Besides, we propose a multi-modal feature aggregation (MFA) module to integrate the modality-specific features from each individual decoder into the shared decoder, which can provide rich complementary multi-modal information to boost the saliency detection performance. Further, a skip connection is used to combine hierarchical features between the encoder and decoder layers. Experiments on six benchmark datasets demonstrate that our SP-Net outperforms other state-of-the-art methods. Code is available at: https://github.com/taozh2017/SPNet.

Single-DARTS: Towards Stable Architecture Search

Comment: Accepted by ICCV 2021 NeurArch Workshop

Link: http://arxiv.org/abs/2108.08128

Abstract

Differentiable architecture search (DARTS) marks a milestone in Neural Architecture Search (NAS), boasting simplicity and small search costs. However, DARTS still suffers from frequent performance collapse, which happens when some operations, such as skip connections, zeroes and poolings, dominate the architecture. In this paper, we are the first to point out that the phenomenon is attributed to bi-level optimization. We propose Single-DARTS, which merely uses single-level optimization, updating network weights and architecture parameters simultaneously with the same data batch. Even though single-level optimization has been attempted previously, no literature provides a systematic explanation of this essential point. Replacing bi-level optimization, Single-DARTS clearly alleviates performance collapse and enhances the stability of architecture search. Experiment results show that Single-DARTS achieves state-of-the-art performance on mainstream search spaces. For instance, on NAS-Benchmark-201, the searched architectures are nearly optimal ones. We also validate that the single-level optimization framework is much more stable than the bi-level one. We hope that this simple yet effective method will give some insights on differentiable architecture search. The code is available at https://github.com/PencilAndBike/Single-DARTS.git.
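The single-level update rule can be shown on a toy mixed-operation cell. Everything below (two candidate operations, a quadratic loss, the learning rates) is a made-up illustration of the update scheme, not the paper's search space: the point is only that one batch updates both the weight `w` and the architecture parameters `alpha`, with no separate validation step.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def loss_and_grads(w, alpha, x, y):
    """Toy mixed-op cell: two candidate operations, `w * x` and the zero op,
    mixed by a softmax over architecture parameters `alpha`, trained with a
    least-squares loss (an illustration, not the DARTS search space)."""
    p = softmax(alpha)
    ops = np.stack([w * x, np.zeros_like(x)])
    err = (p[:, None] * ops).sum(0) - y
    loss = (err ** 2).mean()
    dw = (2 * err * p[0] * x).mean()
    dp = np.array([(2 * err * ops[k]).mean() for k in range(2)])
    dalpha = (np.diag(p) - np.outer(p, p)) @ dp   # chain rule through the softmax
    return loss, dw, dalpha

rng = np.random.default_rng(0)
x = rng.normal(size=64)
y = 3.0 * x                                        # the target favours the linear op
w, alpha = 0.0, np.zeros(2)
for _ in range(300):
    # Single-level optimization: the SAME batch updates BOTH the network
    # weight w and the architecture parameters alpha, instead of the
    # alternating train/validation updates of bi-level DARTS.
    loss, dw, dalpha = loss_and_grads(w, alpha, x, y)
    w -= 0.1 * dw
    alpha -= 0.1 * dalpha
```

After training, the softmax weight on the useful linear operation dominates and the loss is near zero, without any bi-level inner/outer loop.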

Target Adaptive Context Aggregation for Video Scene Graph Generation

Comment: ICCV 2021 camera-ready version

Link: http://arxiv.org/abs/2108.08121

Abstract

This paper deals with the challenging task of video scene graph generation (VidSGG), which could serve as a structured video representation for high-level understanding tasks. We present a new detect-to-track paradigm for this task by decoupling the context modeling for relation prediction from the complicated low-level entity tracking. Specifically, we design an efficient method for frame-level VidSGG, termed Target Adaptive Context Aggregation Network (TRACE), with a focus on capturing spatio-temporal context information for relation recognition. Our TRACE framework streamlines the VidSGG pipeline with a modular design, and presents two unique blocks: Hierarchical Relation Tree (HRTree) construction and Target-adaptive Context Aggregation. More specifically, our HRTree first provides an adaptive structure for organizing possible relation candidates efficiently, and guides the context aggregation module to effectively capture spatio-temporal structure information. Then, we obtain a contextualized feature representation for each relation candidate and build a classification head to recognize its relation category. Finally, we provide a simple temporal association strategy to track TRACE-detected results to yield the video-level VidSGG. We perform experiments on two VidSGG benchmarks: ImageNet-VidVRD and Action Genome, and the results demonstrate that our TRACE achieves state-of-the-art performance. The code and models are made available at https://github.com/MCG-NJU/TRACE.

Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision

Comment: Accepted by ICCV 2021

Link: http://arxiv.org/abs/2108.08119

Abstract

Learning a RAW-to-sRGB mapping has drawn increasing attention in recent years, wherein an input raw image is trained to imitate the target sRGB image captured by another camera. However, severe color inconsistency makes it very challenging to generate well-aligned training pairs of input raw and target sRGB images, while learning with inaccurately aligned supervision is prone to causing pixel shift and producing blurry results. In this paper, we circumvent this issue by presenting a joint learning model for image alignment and RAW-to-sRGB mapping. To diminish the effect of color inconsistency in image alignment, we introduce a global color mapping (GCM) module to generate an initial sRGB image given the input raw image, which keeps the spatial location of the pixels unchanged; the target sRGB image is utilized to guide GCM in converting the color towards it. Then a pre-trained optical flow estimation network (e.g., PWC-Net) is deployed to warp the target sRGB image to align with the GCM output. To alleviate the effect of inaccurately aligned supervision, the warped target sRGB image is leveraged to learn the RAW-to-sRGB mapping. When training is done, the GCM module and optical flow network can be detached, thereby bringing no extra computation cost for inference. Experiments show that our method performs favorably against state-of-the-arts on the ZRR and SR-RAW datasets. With our joint learning model, a light-weight backbone can achieve better quantitative and qualitative performance on the ZRR dataset. Code is available at https://github.com/cszhilu1998/RAW-to-sRGB.

Few-Shot Batch Incremental Road Object Detection via Detector Fusion

Comment: Accepted at the 2nd Autonomous Vehicle Vision Workshop, ICCV 2021

Link: http://arxiv.org/abs/2108.08048

Abstract

Incremental few-shot learning has emerged as a new and challenging area in deep learning, whose objective is to train deep learning models using very few samples of new class data, and none of the old class data. In this work we tackle the problem of batch incremental few-shot road object detection using data from the India Driving Dataset (IDD). Our approach, DualFusion, combines object detectors in a manner that allows us to learn to detect rare objects with very limited data, all without severely degrading the performance of the detector on the abundant classes. In the IDD OpenSet incremental few-shot detection task, we achieve a mAP50 score of 40.0 on the base classes and an overall mAP50 score of 38.8, both of which are the highest to date. In the COCO batch incremental few-shot detection task, we achieve a novel AP score of 9.9, surpassing the state-of-the-art novel class performance on the same by over 6.6 times.

Adaptive Graph Convolution for Point Cloud Analysis

Comment: Camera-ready, to be published in ICCV 2021

Link: http://arxiv.org/abs/2108.08035

Abstract

Convolution on 3D point clouds that generalizes from 2D grid-like domains is widely researched yet far from perfect. The standard convolution characterises feature correspondences indistinguishably among 3D points, presenting an intrinsic limitation of poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AdaptConv), which generates adaptive kernels for points according to their dynamically learned features. Compared with using a fixed/isotropic kernel, AdaptConv improves the flexibility of point cloud convolutions, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attentional weight schemes, the proposed AdaptConv implements the adaptiveness inside the convolution operation instead of simply assigning different weights to the neighboring points. Extensive qualitative and quantitative evaluations show that our method outperforms state-of-the-art point cloud classification and segmentation approaches on several benchmark datasets. Our code is available at https://github.com/hrzhou2/AdaptConv-master.

Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting

Comment: ICCV 2021

Link:?http://arxiv.org/abs/2108.08023

Abstract

In crowd counting, because labelling is laborious, it is widely perceived as intractable to collect a new large-scale dataset with plentiful images of large diversity in density, scene, etc. Thus, for learning a general model, training with data from multiple different datasets might be a remedy and of great value. In this paper, we resort to multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet) for unbiasedly learning the knowledge from multiple diverse data domains at the same time. It is mainly achieved by proposing the novel Variational Attention (VA) technique for explicitly modeling the attention distributions for different domains. As an extension to VA, Intrinsic Variational Attention (InVA) is proposed to handle the problems of overlapped domains and sub-domains. Extensive experiments have been conducted to validate the superiority of our DKPNet over several popular datasets, including ShanghaiTech A/B, UCF-QNRF and NWPU.
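
The "explicitly modeling attention distributions for different domains" idea can be sketched as a per-domain Gaussian over channel-attention vectors, sampled via the reparameterization trick. All shapes and parameters below are hypothetical placeholders, not DKPNet's actual design:

```python
import numpy as np

rng = np.random.default_rng(7)

# Variational-attention sketch: each data domain keeps its own Gaussian
# over channel-attention vectors, so domain-specific knowledge stays
# explicit during multi-domain joint training.
C, n_domains = 16, 3
mu = rng.standard_normal((n_domains, C)) * 0.1    # per-domain means
log_var = np.full((n_domains, C), -2.0)           # per-domain log-variances

def sample_attention(domain):
    # Reparameterization trick: z = mu + sigma * eps keeps sampling
    # differentiable with respect to the domain's distribution parameters.
    eps = rng.standard_normal(C)
    z = mu[domain] + np.exp(0.5 * log_var[domain]) * eps
    return 1.0 / (1.0 + np.exp(-z))               # sigmoid gate in (0, 1)

feat = rng.standard_normal(C)
gated = sample_attention(domain=1) * feat          # domain-conditioned features
print(gated.shape)  # (16,)
```

In this framing, InVA would refine how the per-domain distributions relate when domains overlap; the sketch only shows the base per-domain case.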

Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates

Comment: Accepted by ICCV 2021

Link:?http://arxiv.org/abs/2108.08020

Abstract

Co-speech gesture generation aims to synthesize a gesture sequence that not only looks real but also matches the input speech audio. Our method generates the movements of a complete upper body, including arms, hands, and the head. Although recent data-driven methods achieve great success, challenges still exist, such as limited variety, poor fidelity, and the lack of objective metrics. Motivated by the fact that speech cannot fully determine the gesture, we design a method that learns a set of gesture template vectors to model the latent conditions, which relieves the ambiguity. In our method, the template vector determines the general appearance of a generated gesture sequence, while the speech audio drives subtle movements of the body, both indispensable for synthesizing a realistic gesture sequence. Due to the intractability of an objective metric for gesture-speech synchronization, we adopt the lip-sync error as a proxy metric to tune and evaluate the synchronization ability of our model. Extensive experiments show the superiority of our method in both objective and subjective evaluations of fidelity and synchronization.
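
The division of labor described above, a template vector fixing the coarse look while audio drives frame-level motion, can be sketched in a few lines. Shapes and the linear audio-to-motion map are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(8)

# Learned-template sketch: a discrete template vector determines the
# general appearance of the whole sequence, while per-frame audio features
# add subtle residual motion on top of it.
T, D_audio, D_pose, n_templates = 30, 12, 10, 4
templates = rng.standard_normal((n_templates, D_pose))   # learned templates
audio = rng.standard_normal((T, D_audio))                # per-frame audio feats
W = rng.standard_normal((D_audio, D_pose)) * 0.1         # toy audio-to-motion map

def synthesize(template_id):
    base = templates[template_id]            # coarse appearance of sequence
    residual = audio @ W                     # subtle audio-driven movement
    return base[None, :] + residual          # (T, D_pose) pose sequence

poses = synthesize(template_id=2)
print(poses.shape)  # (30, 10)
```

Switching `template_id` changes the whole sequence's character while the audio-driven residual stays in sync with speech, which is exactly the ambiguity-relieving role the template plays.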

RANK-NOSH: Efficient Predictor-Based Architecture Search via Non-Uniform Successive Halving

Comment: To appear in ICCV 2021.

Code:?https://github.com/ruocwang

Link:?http://arxiv.org/abs/2108.08019

Abstract

Predictor-based algorithms have achieved remarkable performance in Neural Architecture Search (NAS) tasks. However, these methods suffer from high computation costs, as training the performance predictor usually requires training and evaluating hundreds of architectures from scratch. Previous works along this line mainly focus on reducing the number of architectures required to fit the predictor. In this work, we tackle this challenge from a different perspective: improving search efficiency by cutting down the computation budget of architecture training. We propose NOn-uniform Successive Halving (NOSH), a hierarchical scheduling algorithm that terminates the training of underperforming architectures early to avoid wasting budget. To effectively leverage the non-uniform supervision signals produced by NOSH, we formulate predictor-based architecture search as learning to rank with pairwise comparisons. The resulting method, RANK-NOSH, reduces the search budget by ~5x while achieving competitive or even better performance than previous state-of-the-art predictor-based methods on various spaces and datasets.
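
The budget-saving mechanism, terminating underperforming architectures early so survivors train longer, is the classic successive-halving pattern. A toy simulation of that scheduling (the search space, proxy score, and round sizes are made up for illustration; this is not RANK-NOSH's actual non-uniform schedule or ranking loss):

```python
import random

random.seed(0)

# Hypothetical search space: each architecture has a hidden "true" quality;
# a partially trained proxy score gets less noisy with more training budget.
true_quality = {f"arch_{i}": random.random() for i in range(16)}

def proxy_score(arch, epochs):
    noise = random.gauss(0, 0.5 / epochs)     # more budget -> less noise
    return true_quality[arch] + noise

def successive_halving(archs, rounds=3, epochs_per_round=2):
    """Halving sketch: drop the worse half each round, so budget
    concentrates on promising architectures instead of training all
    candidates to completion."""
    survivors, spent = list(archs), 0
    for r in range(rounds):
        epochs = epochs_per_round * (r + 1)   # survivors train longer
        scored = [(proxy_score(a, epochs), a) for a in survivors]
        spent += epochs * len(survivors)
        scored.sort(reverse=True)
        survivors = [a for _, a in scored[: max(1, len(scored) // 2)]]
    return survivors, spent

best, budget = successive_halving(true_quality)
full_budget = 16 * sum(2 * (r + 1) for r in range(3))  # train everything fully
print(best, budget, full_budget)
```

With 16 candidates and 3 rounds, the halved schedule spends 88 epoch-units versus 192 for training everything fully, which is the kind of saving the abstract's ~5x figure refers to. The pairwise learning-to-rank part then only needs relative comparisons among these partially trained scores, not calibrated absolute accuracies.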

Deep Hybrid Self-Prior for Full 3D Mesh Generation

Comment: Accepted by ICCV2021

Link:?http://arxiv.org/abs/2108.08017

Abstract

We present a deep learning pipeline that leverages a network self-prior to recover a full 3D model, consisting of both a triangular mesh and a texture map, from a colored 3D point cloud. Different from previous methods that exploit either a 2D self-prior for image editing or a 3D self-prior for pure surface reconstruction, we propose to exploit a novel hybrid 2D-3D self-prior in deep neural networks to significantly improve the geometry quality and produce a high-resolution texture map, which is typically missing from the output of commodity-level 3D scanners. In particular, we first generate an initial mesh using a 3D convolutional neural network with the 3D self-prior, and then encode both 3D information and color information in the 2D UV atlas, which is further refined by 2D convolutional neural networks with the self-prior. In this way, both 2D and 3D self-priors are utilized for mesh and texture recovery. Experiments show that, without the need for any additional training data, our method recovers a high-quality 3D textured mesh model from sparse input, and outperforms state-of-the-art methods in terms of both geometry and texture quality.

Multi-Anchor Active Domain Adaptation for Semantic Segmentation

Comment: ICCV 2021 Oral

Link:?http://arxiv.org/abs/2108.08012

Abstract

Unsupervised domain adaptation has proven to be an effective approach for alleviating the intensive workload of manual annotation by aligning synthetic source-domain data and real-world target-domain samples. Unfortunately, mapping the target-domain distribution to the source domain unconditionally may distort the essential structural information of the target-domain data. To this end, we first propose a novel multi-anchor based active learning strategy to assist domain adaptation for the semantic segmentation task. By innovatively adopting multiple anchors instead of a single centroid, the source domain can be better characterized as a multimodal distribution, so more representative and complementary samples are selected from the target domain. With little workload to manually annotate these active samples, the distortion of the target-domain distribution can be effectively alleviated, resulting in a large performance gain. The multi-anchor strategy is additionally employed to model the target distribution. By regularizing the latent representations of the target samples to be compact around multiple anchors through a novel soft alignment loss, more precise segmentation can be achieved. Extensive experiments on public datasets demonstrate that the proposed approach significantly outperforms state-of-the-art methods, along with a thorough ablation study verifying the effectiveness of each component.
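
The multi-anchor intuition, several cluster centers characterizing a multimodal source domain instead of one centroid, can be sketched with a toy k-means plus a distance-based active selection. All data, the cluster count, and the selection rule are illustrative assumptions, not the paper's feature space or criterion:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy features: source domain is bimodal; target samples are scattered.
source = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
target = rng.normal(3, 3, (40, 2))

def kmeans(X, k, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

# Multiple anchors capture the source's two modes; a single centroid
# would sit between them and represent neither.
anchors = kmeans(source, k=2)

# Active-selection sketch: target samples farthest from every anchor are
# the least source-like, hence the most informative ones to annotate.
dist_to_nearest = ((target[:, None] - anchors[None]) ** 2).sum(-1).min(1)
active = np.argsort(dist_to_nearest)[-5:]
print(anchors.shape, active.shape)  # (2, 2) (5,)
```

The soft alignment loss in the paper would then pull target features toward their nearest anchor; here only the anchor construction and sample selection are shown.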

Structured Outdoor Architecture Reconstruction by Exploration and Classification

Comment: 2021 International Conference on Computer Vision (ICCV 2021)

Link:?http://arxiv.org/abs/2108.07990

Abstract

This paper presents an explore-and-classify framework for structured architectural reconstruction from an aerial image. Starting from a potentially imperfect building reconstruction produced by an existing algorithm, our approach 1) explores the space of building models by modifying the reconstruction via heuristic actions; 2) learns to classify the correctness of building models while generating classification labels based on the ground truth; and 3) repeats. At test time, we iterate exploration and classification, seeking the result with the best classification score. We evaluate the approach using initial reconstructions from two baselines and two state-of-the-art reconstruction algorithms. Qualitative and quantitative evaluations demonstrate that our approach consistently improves the reconstruction quality from every initial reconstruction.
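
The test-time loop, apply heuristic edits and keep whichever candidate the correctness classifier scores highest, is essentially hill climbing. A toy sketch under heavy simplifications (the bit-vector "building model", toggle action, and overlap score all stand in for the paper's geometric actions and learned classifier):

```python
import random

random.seed(0)

# Explore-and-classify sketch: start from an imperfect reconstruction and
# iterate (heuristic edit -> classifier score -> keep if better).
target = [1, 0, 1, 1, 0, 1, 0, 0]          # stand-in "ground-truth" structure
model = [0] * 8                             # imperfect initial reconstruction

# Toy correctness "classifier": fraction of structure elements it judges
# correct (a learned network plays this role in the paper).
score = lambda m: sum(a == b for a, b in zip(m, target)) / len(m)

def explore(model, steps=50):
    best, best_s = model[:], score(model)
    for _ in range(steps):
        cand = best[:]
        i = random.randrange(len(cand))
        cand[i] ^= 1                        # heuristic action: toggle an element
        if score(cand) > best_s:            # classifier picks the better model
            best, best_s = cand, score(cand)
    return best, best_s

final, s = explore(model)
print(s)
```

The key property, mirrored in the assertion below, is monotonic improvement: the loop never returns a model the classifier scores worse than the initial reconstruction.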

A New Journey from SDRTV to HDRTV

Comment: Accepted to ICCV

Link:?http://arxiv.org/abs/2108.07978

Abstract

Modern displays are capable of rendering video content with high dynamic range (HDR) and wide color gamut (WCG). However, most available resources are still in standard dynamic range (SDR). Therefore, there is an urgent demand to transform existing SDR-TV content into its HDR-TV version. In this paper, we conduct an analysis of the SDRTV-to-HDRTV task by modeling the formation of SDRTV/HDRTV content. Based on the analysis, we propose a three-step solution pipeline including adaptive global color mapping, local enhancement and highlight generation. Moreover, the above analysis inspires us to present a lightweight network that utilizes global statistics as guidance to conduct image-adaptive color mapping. In addition, we construct a dataset using HDR videos in the HDR10 standard, named HDRTV1K, and select five metrics to evaluate the results of SDRTV-to-HDRTV algorithms. Our final results achieve state-of-the-art performance in quantitative comparisons and visual quality. The code and dataset are available at https://github.com/chxy95/HDRTVNet.
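
The "global statistics as guidance" step can be sketched as a tiny condition network that turns per-frame statistics into the parameters of a global color transform. Every weight, shape, and the gamma expansion below are hypothetical placeholders, not HDRTVNet's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
sdr = rng.random((4, 4, 3))                          # toy SDR frame in [0, 1]

# Image-adaptive global color mapping sketch: a small "condition network"
# maps global statistics of the frame to the parameters of a per-image
# color transform, so the same network adapts its mapping to each input.
stats = sdr.reshape(-1, 3).mean(0)                   # global statistics (mean RGB)
W = rng.standard_normal((3, 12)) * 0.05              # toy condition-net weights
params = stats @ W                                   # 12 transform parameters
M = np.eye(3) + params[:9].reshape(3, 3)             # 3x3 color matrix
bias, gamma = params[9:12], 2.0                      # offset + dynamic-range expansion

hdr_linear = (sdr.reshape(-1, 3) @ M.T + bias).clip(0, 1) ** gamma
hdr = hdr_linear.reshape(sdr.shape)
print(hdr.shape)  # (4, 4, 3)
```

Because the transform is global, it stays lightweight; the paper's local enhancement and highlight generation steps would then handle the spatially varying remainder.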

Thermal Image Processing via Physics-Inspired Deep Networks

Comment: Accepted to the 2nd ICCV workshop on Learning for Computational Imaging (LCI)

Link:?http://arxiv.org/abs/2108.07973

Abstract

We introduce DeepIR, a new thermal image processing framework that combines physically accurate sensor modeling with deep network-based image representation. Our key enabling observation is that images captured by thermal sensors can be factored into slowly changing, scene-independent sensor non-uniformities (which can be accurately modeled using physics) and a scene-specific radiance flux (which is well represented by a deep network-based regularizer). DeepIR requires neither training data nor periodic ground-truth calibration with a known black-body target, making it well suited for practical computer vision tasks. We demonstrate the power of DeepIR by developing new denoising and super-resolution algorithms that exploit multiple images of the scene captured with camera jitter. Simulated and real data experiments demonstrate that DeepIR can perform high-quality non-uniformity correction with as few as three images, achieving a 10 dB PSNR improvement over competing approaches.
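A 1-D toy can make the enabling factorization concrete: under camera jitter the scene moves but the fixed-pattern non-uniformity does not, so frame differences depend only on the scene. The signal, offset statistics, and shift values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(3)

# DeepIR's factorization (1-D toy): a captured frame = shifted scene
# radiance + a fixed, scene-independent per-pixel sensor offset.
scene = np.sin(np.linspace(0, 4 * np.pi, 64))
offset = rng.normal(0, 0.3, 64)                 # fixed-pattern non-uniformity
f1 = np.roll(scene, 0) + offset                 # two frames under camera jitter
f2 = np.roll(scene, 3) + offset

# The non-uniformity is identical in both frames, so their difference
# cancels it exactly -- this is what lets a few jittered images separate
# the sensor term from the scene term without a calibration target.
diff = f1 - f2
expected = scene - np.roll(scene, 3)
print(np.allclose(diff, expected))  # True
```

DeepIR exploits this separability by fitting a physics-based model to the fixed term and a deep-network prior to the scene term; the toy only demonstrates why jitter makes the two identifiable.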

SynFace: Face Recognition with Synthetic Data

Comment: Accepted by ICCV 2021

Link:?http://arxiv.org/abs/2108.07960

Abstract

With the recent success of deep neural networks, remarkable progress has been achieved in face recognition. However, collecting large-scale real-world training data for face recognition has turned out to be challenging, especially due to label noise and privacy issues. Meanwhile, existing face recognition datasets are usually collected from web images and lack detailed annotations of attributes (e.g., pose and expression), so the influence of different attributes on face recognition has been poorly investigated. In this paper, we address the above issues in face recognition using synthetic face images, i.e., SynFace. Specifically, we first explore the performance gap between recent state-of-the-art face recognition models trained with synthetic and real face images. We then analyze the underlying causes behind the performance gap, e.g., the poor intra-class variations and the domain gap between synthetic and real face images. Inspired by this, we devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the above performance gap, demonstrating the great potential of synthetic data for face recognition. Furthermore, with a controllable face synthesis model, we can easily manage different factors of synthetic face generation, including pose, expression, illumination, the number of identities, and samples per identity. Therefore, we also perform a systematic empirical analysis on synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
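
The identity mixup (IM) idea follows the standard mixup recipe: blend two inputs and their labels with the same coefficient. A minimal sketch, where the arrays are toy stand-ins for rendered faces and the soft-label dict is a hypothetical representation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Identity-mixup sketch: interpolating two synthetic identities enriches
# intra-class variation, with the blend coefficient acting as a soft label.
face_a = rng.random((8, 8, 3))      # toy image of synthetic identity A
face_b = rng.random((8, 8, 3))      # toy image of synthetic identity B

def identity_mixup(img_a, img_b, lam):
    mixed = lam * img_a + (1.0 - lam) * img_b
    soft_label = {"id_a": lam, "id_b": 1.0 - lam}
    return mixed, soft_label

mixed, label = identity_mixup(face_a, face_b, lam=0.75)
print(mixed.shape, label["id_a"])  # (8, 8, 3) 0.75
```

Domain mixup (DM) applies the same interpolation between synthetic and real samples to shrink the domain gap; only the identity variant is sketched here.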

Self-Supervised Visual Representations Learning by Contrastive Mask Prediction

Comment: Accepted to ICCV 2021

Link:?http://arxiv.org/abs/2108.07954

Abstract

Advanced self-supervised visual representation learning methods rely on the instance discrimination (ID) pretext task. We point out that the ID task has an implicit semantic consistency (SC) assumption, which may not hold in unconstrained datasets. In this paper, we propose a novel contrastive mask prediction (CMP) task for visual representation learning and design a mask contrast (MaskCo) framework to implement the idea. MaskCo contrasts region-level features instead of view-level features, which makes it possible to identify the positive sample without any assumptions. To close the domain gap between masked and unmasked features, we design a dedicated mask prediction head in MaskCo. This module is shown to be the key to the success of the CMP task. We evaluate MaskCo on training datasets beyond ImageNet and compare its performance with MoCo V2. Results show that MaskCo achieves performance comparable to MoCo V2 when trained on ImageNet, but demonstrates stronger performance across a range of downstream tasks when COCO or Conceptual Captions are used for training. MaskCo provides a promising alternative to ID-based methods for self-supervised learning in the wild.
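
Region-level contrast can be sketched with a standard InfoNCE loss: the feature predicted for a masked region should match the real feature of that region rather than other regions of the same image. Feature dimensions, the noise model, and the temperature are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Region-level contrast sketch (MaskCo spirit): the positive for a masked
# region is that same region's unmasked feature, so no semantic-consistency
# assumption across augmented views is needed.
D, R = 16, 6                                    # feature dim, regions per image
region_feats = rng.standard_normal((R, D))      # unmasked region features
pred = region_feats[2] + rng.normal(0, 0.1, D)  # prediction for masked region 2

def info_nce(pred, feats, pos_idx, tau=0.1):
    norm = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = norm(feats) @ norm(pred)             # cosine similarity per region
    logits = sims / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[pos_idx])              # cross-entropy on the positive

loss = info_nce(pred, region_feats, pos_idx=2)
print(round(float(loss), 6))
```

The dedicated mask prediction head in the paper produces `pred`; here it is simulated as the true region feature plus noise, so the loss is small exactly when the head bridges the masked/unmasked domain gap well.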

FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning

Comment: 10 pages, 9 figures. Accepted by ICCV 2021

Link:?http://arxiv.org/abs/2108.07938

Abstract

In this paper, we propose a talking face generation method that takes an audio signal as input and a short target video clip as reference, and synthesizes a photo-realistic video of the target face with natural lip motions, head poses, and eye blinks that are in sync with the input audio signal. We note that the synthetic face attributes include not only explicit ones such as lip motions that have high correlations with speech, but also implicit ones such as head poses and eye blinks that have only weak correlation with the input audio. To model such complicated relationships among different face attributes and the input audio, we propose a FACe Implicit Attribute Learning Generative Adversarial Network (FACIAL-GAN), which integrates phonetics-aware, context-aware, and identity-aware information to synthesize the 3D face animation with realistic motions of lips, head poses, and eye blinks. Then, our rendering-to-video network takes the rendered face images and the attention map of eye blinks as input to generate the photo-realistic output video frames. Experimental results and user studies show our method can generate realistic talking face videos with not only synchronized lip motions, but also natural head movements and eye blinks, with better quality than the results of state-of-the-art methods.

Towards Interpreting Zoonotic Potential of Betacoronavirus Sequences With Attention

Comment: 11 pages, 8 figures, 1 table, accepted at the ICLR 2021 workshop on Machine Learning for Preventing and Combating Pandemics

Link:?http://arxiv.org/abs/2108.08077

Abstract

Current methods for viral discovery target evolutionarily conserved proteins that accurately identify virus families but remain unable to distinguish the zoonotic potential of newly discovered viruses. Here, we apply an attention-enhanced long short-term memory (LSTM) deep neural network classifier to a highly conserved viral protein target to predict zoonotic potential across betacoronaviruses. The classifier achieves 94% accuracy. Analysis and visualization of attention at the sequence- and structure-level features indicate a possible association between important protein-protein interactions governing viral replication in zoonotic betacoronaviruses and zoonotic transmission.
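
The attention-enhanced part, which both pools the sequence and provides the interpretability the abstract highlights, can be sketched as attention pooling over per-position hidden states. The hidden states, weights, and dimensions below are random stand-ins for the trained LSTM and classifier:

```python
import numpy as np

rng = np.random.default_rng(6)

# Attention-pooling sketch: per-position hidden states (stand-ins for LSTM
# outputs over a protein sequence) are combined with learned attention;
# the weights themselves show which residues drive the prediction.
L, H = 20, 8
hidden = rng.standard_normal((L, H))        # hypothetical LSTM outputs
w_att = rng.standard_normal(H)              # attention scoring vector
w_out = rng.standard_normal(H)              # classifier weights

scores = hidden @ w_att
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                        # attention over sequence positions
context = alpha @ hidden                    # attention-weighted summary
prob = 1.0 / (1.0 + np.exp(-(context @ w_out)))  # zoonotic-potential score

top_positions = np.argsort(alpha)[-3:]      # most-attended residues
print(alpha.shape, 0.0 < prob < 1.0)  # (20,) True
```

Inspecting `alpha` (and `top_positions`) is the mechanism behind the paper's sequence-level visualizations: high-attention positions are candidate functionally important residues.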

XAI Methods for Neural Time Series Classification: A Brief Review

Comment: 8 pages, 0 figures, Accepted as a poster presentation

Link:?http://arxiv.org/abs/2108.08009

Abstract

Deep learning models have recently demonstrated remarkable results in a variety of tasks, which is why they are being increasingly applied in high-stakes domains such as industry, medicine, and finance. Considering that automatic predictions in these domains might have a substantial impact on the well-being of a person, as well as considerable financial and legal consequences for an individual or a company, all actions and decisions that result from applying these models have to be accountable. Given that a substantial amount of the data collected in high-stakes domains is in the form of time series, in this paper we examine the current state of eXplainable AI (XAI) methods with a focus on approaches for opening up deep learning black boxes for the task of time series classification. Finally, our contribution also aims at deriving promising directions for future work, to advance XAI for deep learning on time series data.


Summary

The above is the full content of "今日arXiv精选 | 35篇顶会论文:ICCV/ CIKM/ ACM MM" collected by 生活随笔; we hope it helps you solve the problems you have encountered.

If you find the content on 生活随笔 useful, feel free to recommend the site to your friends.
