
今日arXiv精选 | 34 Top-Conference Papers: CIKM / ACL / Interspeech / ICCV / ACM MM


About #今日arXiv精选

This is a column under「AI 学术前沿」: every day, the editors select high-quality papers from arXiv and deliver them to readers.

DESYR: Definition and Syntactic Representation Based Claim Detection on the Web

Comment: 10 pages, Accepted at CIKM 2021

Link: http://arxiv.org/abs/2108.08759

Abstract

The formulation of a claim rests at the core of argument mining. To demarcate between a claim and a non-claim is arduous for both humans and machines, owing to latent linguistic variance between the two and the inadequacy of extensive definition-based formalization. Furthermore, the increase in the usage of online social media has resulted in an explosion of unsolicited information on the web presented as informal text. To account for the aforementioned, in this paper, we propose DESYR, a framework that intends to annul the said issues for informal web-based text by leveraging a combination of hierarchical representation learning (dependency-inspired Poincare embedding), definition-based alignment, and feature projection. We do away with fine-tuning compute-heavy language models in favor of fabricating a more domain-centric but lighter approach. Experimental results indicate that DESYR builds upon the state-of-the-art system across four benchmark claim datasets, most of which were constructed with informal texts. We see an increase of 3 claim-F1 points on the LESA-Twitter dataset, an increase of 1 claim-F1 point and 9 macro-F1 points on the Online Comments (OC) dataset, an increase of 24 claim-F1 points and 17 macro-F1 points on the Web Discourse (WD) dataset, and an increase of 8 claim-F1 points and 5 macro-F1 points on the Micro Texts (MT) dataset. We also perform an extensive analysis of the results. We make a 100-D pre-trained version of our Poincare variant available along with the source code.
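
At the heart of DESYR's hierarchical representation is the Poincaré ball. As a hedged illustration of the geometry involved (not the paper's code), the geodesic distance between two points embedded inside the unit ball can be computed as follows:

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points in the Poincare ball (||x|| < 1)."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq_dist / denom))

u, v = np.array([0.1, 0.2]), np.array([0.5, -0.3])
print(poincare_distance(u, v))
```

Because distances blow up near the boundary of the ball, tree-like structures such as dependency hierarchies embed with low distortion, which is what makes this geometry attractive for hierarchical representation learning.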

Fine-Grained Element Identification in Complaint Text of Internet Fraud

Comment: 5 pages, 5 figures, 3 tables; accepted as a short paper to CIKM 2021

Link: http://arxiv.org/abs/2108.08676

Abstract

Existing systems dealing with online complaints provide a final decision without explanations. We propose to analyse the complaint text of internet fraud in a fine-grained manner. Considering that the complaint text includes multiple clauses with various functions, we propose to identify the role of each clause and classify it into one of several types of fraud element. We construct a large labeled dataset originating from a real finance service platform. We build an element identification model on top of BERT and propose two additional modules that utilize the context of the complaint text for better element label classification, namely a global context encoder and a label refiner. Experimental results show the effectiveness of our model.

Language Model Augmented Relevance Score

Comment: In ACL 2021

Link: http://arxiv.org/abs/2108.08485

Abstract

Although automated metrics are commonly used to evaluate NLG systems, they often correlate poorly with human judgements. Newer metrics such as BERTScore have addressed many weaknesses in prior metrics such as BLEU and ROUGE, which rely on n-gram matching. These newer methods, however, are still limited in that they do not consider the generation context, so they cannot properly reward generated text that is correct but deviates from the given reference. In this paper, we propose Language Model Augmented Relevance Score (MARS), a new context-aware metric for NLG evaluation. MARS leverages off-the-shelf language models, guided by reinforcement learning, to create augmented references that consider both the generation context and available human references, which are then used as additional references to score generated text. Compared with seven existing metrics in three common NLG tasks, MARS not only achieves higher correlation with human reference judgements, but also differentiates well-formed candidates from adversarial samples to a larger degree.

QUEACO: Borrowing Treasures from Weakly-labeled Behavior Data for Query Attribute Value Extraction

Comment: The 30th ACM International Conference on Information and Knowledge Management (CIKM 2021, Applied Research Track)

Link: http://arxiv.org/abs/2108.08468

Abstract

We study the problem of query attribute value extraction, which aims to identify named entities from user queries as diverse surface form attribute values and afterward transform them into formally canonical forms. Such a problem consists of two phases: named entity recognition (NER) and attribute value normalization (AVN). However, existing works only focus on the NER phase but neglect the equally important AVN. To bridge this gap, this paper proposes a unified query attribute value extraction system in e-commerce search named QUEACO, which involves both phases. Moreover, by leveraging large-scale weakly-labeled behavior data, we further improve the extraction performance with less supervision cost. Specifically, for the NER phase, QUEACO adopts a novel teacher-student network, where a teacher network that is trained on the strongly-labeled data generates pseudo-labels to refine the weakly-labeled data for training a student network. Meanwhile, the teacher network can be dynamically adapted by the feedback of the student's performance on strongly-labeled data to maximally denoise the noisy supervisions from the weak labels. For the AVN phase, we also leverage the weakly-labeled query-to-attribute behavior data to normalize surface form attribute values from queries into canonical forms from products. Extensive experiments on a real-world large-scale e-commerce dataset demonstrate the effectiveness of QUEACO.
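
As a hedged sketch of the core teacher-student loop (toy stand-ins for the paper's BERT-based NER models; all shapes and names here are illustrative): the teacher, trained on strong labels, re-labels the weak data, and the student trains on those pseudo-labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, TAGS, DIM = 1000, 5, 64

def make_tagger() -> nn.Module:
    # Toy token tagger standing in for a BERT-based NER model.
    return nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, TAGS))

teacher, student = make_tagger(), make_tagger()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

strong_x = torch.randint(0, VOCAB, (8, 16))   # strongly-labeled queries
strong_y = torch.randint(0, TAGS, (8, 16))
weak_x = torch.randint(0, VOCAB, (32, 16))    # weakly-labeled behavior data

# 1) Teacher refines the weak data into pseudo-labels.
with torch.no_grad():
    pseudo_y = teacher(weak_x).argmax(-1)

# 2) Student trains on strong labels plus teacher pseudo-labels.
for x, y in [(strong_x, strong_y), (weak_x, pseudo_y)]:
    loss = F.cross_entropy(student(x).reshape(-1, TAGS), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# 3) In the paper, the teacher is then dynamically adapted using the
#    student's performance on the strongly-labeled set; that meta-style
#    feedback step is omitted here for brevity.
```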

Augmenting Slot Values and Contexts for Spoken Language Understanding with Pretrained Models

Comment: Accepted by Interspeech 2021

Link: http://arxiv.org/abs/2108.08451

Abstract

Spoken Language Understanding (SLU) is an essential step in building a dialogue system. Due to the expensive cost of obtaining labeled data, SLU suffers from the data scarcity problem. Therefore, in this paper, we focus on data augmentation for the slot filling task in SLU. To achieve that, we aim at generating more diverse data based on existing data. Specifically, we try to exploit the latent language knowledge of pretrained language models by finetuning them. We propose two strategies for the finetuning process: value-based and context-based augmentation. Experimental results on two public SLU datasets show that, compared with existing data augmentation methods, our proposed method can generate more diverse sentences and significantly improve performance on SLU.
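
As a hedged illustration of the value-based idea, independent of the paper's LM-based implementation: new slot-filling examples can be generated by swapping slot values while keeping the annotations consistent. The slot inventory below is made up for the example.

```python
import random

# Hypothetical slot inventory; a real system would mine values from training data.
SLOT_VALUES = {
    "city": ["boston", "denver", "seattle"],
    "airline": ["delta", "united"],
}

def value_based_augment(tokens, tags):
    """Replace each slot value with another value of the same slot type.

    Uses a simplified one-token-per-slot scheme for clarity; BIO-tagged
    spans would need to be grouped before replacement.
    """
    new_tokens = []
    for tok, tag in zip(tokens, tags):
        if tag in SLOT_VALUES:
            tok = random.choice(SLOT_VALUES[tag])
        new_tokens.append(tok)
    return new_tokens, tags

tokens = ["fly", "from", "boston", "on", "delta"]
tags = ["O", "O", "city", "O", "airline"]
print(value_based_augment(tokens, tags))
```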

Graph-to-3D: End-to-End Generation and Manipulation of 3D Scenes Using Scene Graphs

Comment: accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.08841

Abstract

Controllable scene synthesis consists of generating 3D information that satisfies underlying specifications. These specifications should be abstract, i.e. allowing easy user interaction, whilst providing enough interface for detailed control. Scene graphs are representations of a scene composed of objects (nodes) and inter-object relationships (edges), and have proven to be particularly suited for this task, as they allow for semantic control over the generated content. Previous works tackling this task often rely on synthetic data and retrieve object meshes, which naturally limits the generation capabilities. To circumvent this issue, we instead propose the first work that directly generates shapes from a scene graph in an end-to-end manner. In addition, we show that the same model supports scene modification, using the respective scene graph as the interface. Leveraging Graph Convolutional Networks (GCN), we train a variational Auto-Encoder on top of the object and edge categories, as well as 3D shapes and scene layouts, allowing later sampling of new scenes and shapes.

PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers

Comment: Accepted to ICCV 2021 (Oral Presentation)

Link: http://arxiv.org/abs/2108.08839

Abstract

Point clouds captured in real-world applications are often incomplete due to the limited sensor resolution, single viewpoint, and occlusion. Therefore, recovering the complete point clouds from partial ones becomes an indispensable task in many practical applications. In this paper, we present a new method that reformulates point cloud completion as a set-to-set translation problem and design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion. By representing the point cloud as a set of unordered groups of points with position embeddings, we convert the point cloud to a sequence of point proxies and employ the transformers for point cloud generation. To facilitate transformers to better leverage the inductive bias about 3D geometric structures of point clouds, we further devise a geometry-aware block that models the local geometric relationships explicitly. The migration of transformers enables our model to better learn structural knowledge and preserve detailed information for point cloud completion. Furthermore, we propose two more challenging benchmarks with more diverse incomplete point clouds that can better reflect the real-world scenarios to promote future research. Experimental results show that our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones. Code is available at https://github.com/yuxumin/PoinTr
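
A hedged sketch of how a partial cloud might be turned into "point proxies" before entering a transformer. This is simplified relative to the paper, which extracts proxy features with a learned network; here a farthest-point-sampling plus k-nearest-neighbor grouping with a mean feature stands in.

```python
import numpy as np

def farthest_point_sample(points: np.ndarray, k: int) -> np.ndarray:
    """Pick k well-spread center indices from an (N, 3) cloud."""
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(dist.argmax())
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

def point_proxies(points: np.ndarray, k: int = 16, nbrs: int = 32):
    """Each proxy = (center position, mean of its local patch).

    The patch mean is a stand-in for a learned local feature extractor.
    """
    centers = points[farthest_point_sample(points, k)]
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=-1)
    patch_idx = np.argsort(d, axis=1)[:, :nbrs]   # (k, nbrs) nearest points
    feats = points[patch_idx].mean(axis=1)        # (k, 3) patch summaries
    return centers, feats  # the length-k proxy sequence fed to a transformer

cloud = np.random.rand(512, 3)
centers, feats = point_proxies(cloud)
print(centers.shape, feats.shape)  # (16, 3) (16, 3)
```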

Fine-grained Semantics-aware Representation Enhancement for Self-supervised Monocular Depth Estimation

Comment: ICCV 2021 (Oral)

Link: http://arxiv.org/abs/2108.08829

Abstract

Self-supervised monocular depth estimation has been widely studied, owing to its practical importance and recent promising improvements. However, most works suffer from limited supervision of photometric consistency, especially in weak texture regions and at object boundaries. To overcome this weakness, we propose novel ideas to improve self-supervised monocular depth estimation by leveraging cross-domain information, especially scene semantics. We focus on incorporating implicit semantic knowledge into geometric representation enhancement and suggest two ideas: a metric learning approach that exploits the semantics-guided local geometry to optimize intermediate depth representations and a novel feature fusion module that judiciously utilizes cross-modality between two heterogeneous feature representations. We comprehensively evaluate our methods on the KITTI dataset and demonstrate that our method outperforms state-of-the-art methods. The source code is available at https://github.com/hyBlue/FSRE-Depth.

Towards Vivid and Diverse Image Colorization with Generative Color Prior

Comment: ICCV 2021

Link: http://arxiv.org/abs/2108.08826

Abstract

Colorization has attracted increasing interest in recent years. Classic reference-based methods usually rely on external color images for plausible results. A large image database or online search engine is inevitably required for retrieving such exemplars. Recent deep-learning-based methods can automatically colorize images at a low cost, but they are often accompanied by unsatisfactory artifacts and incoherent colors. In this work, we aim at recovering vivid colors by leveraging the rich and diverse color priors encapsulated in a pretrained Generative Adversarial Network (GAN). Specifically, we first "retrieve" matched features (similar to exemplars) via a GAN encoder and then incorporate these features into the colorization process with feature modulations. Thanks to the powerful generative color prior and delicate designs, our method can produce vivid colors with a single forward pass. Moreover, it is highly convenient to obtain diverse results by modifying GAN latent codes. Our method also inherits the merit of interpretable controls of GANs and can attain controllable and smooth transitions by walking through the GAN latent space. Extensive experiments and user studies demonstrate that our method achieves superior performance to previous works.

Click to Move: Controlling Video Generation with Sparse Motion

Comment: Accepted by International Conference on Computer Vision (ICCV 2021)

Link: http://arxiv.org/abs/2108.08815

Abstract

This paper introduces Click to Move (C2M), a novel framework for video generation where the user can control the motion of the synthesized video through mouse clicks specifying simple object trajectories of the key objects in the scene. Our model receives as input an initial frame, its corresponding segmentation map and the sparse motion vectors encoding the input provided by the user. It outputs a plausible video sequence starting from the given frame and with a motion that is consistent with user input. Notably, our proposed deep architecture incorporates a Graph Convolution Network (GCN) modelling the movements of all the objects in the scene in a holistic manner and effectively combining the sparse user motion information and image features. Experimental results show that C2M outperforms existing methods on two publicly available datasets, thus demonstrating the effectiveness of our GCN framework at modelling object interactions. The source code is publicly available at https://github.com/PierfrancescoArdino/C2M.

Causal Attention for Unbiased Visual Recognition

Comment: Accepted by ICCV 2021

Link: http://arxiv.org/abs/2108.08782

Abstract

The attention module does not always help deep models learn causal features that are robust in any confounding context; e.g., a foreground object feature should be invariant to different backgrounds. This is because the confounders trick the attention into capturing spurious correlations that benefit the prediction when the training and testing data are IID (independent and identically distributed), but harm the prediction when the data are OOD (out-of-distribution). The sole fundamental solution to learning causal attention is causal intervention, which requires additional annotations of the confounders; e.g., a "dog" model is learned within "grass+dog" and "road+dog" contexts respectively, so the "grass" and "road" contexts will no longer confound the "dog" recognition. However, such annotation is not only prohibitively expensive, but also inherently problematic, as the confounders are elusive in nature. In this paper, we propose a causal attention module (CaaM) that self-annotates the confounders in an unsupervised fashion. In particular, multiple CaaMs can be stacked and integrated into conventional attention CNNs and self-attention Vision Transformers. In OOD settings, deep models with CaaM outperform those without it significantly; even in IID settings, the attention localization is also improved by CaaM, showing great potential in applications that require robust visual saliency. Codes are available at https://github.com/Wangt-CN/CaaM.

Learning to Match Features with Seeded Graph Matching Network

Comment: Accepted by ICCV 2021, code to be released at https://github.com/vdvchen/SGMNet

Link: http://arxiv.org/abs/2108.08771

Abstract

Matching local features across images is a fundamental problem in computer vision. Targeting high accuracy and efficiency, we propose Seeded Graph Matching Network, a graph neural network with sparse structure to reduce redundant connectivity and learn compact representation. The network consists of 1) a Seeding Module, which initializes the matching by generating a small set of reliable matches as seeds, and 2) a Seeded Graph Neural Network, which utilizes seed matches to pass messages within/across images and predicts assignment costs. Three novel operations are proposed as basic elements for message passing: 1) Attentional Pooling, which aggregates keypoint features within the image to seed matches; 2) Seed Filtering, which enhances seed features and exchanges messages across images; and 3) Attentional Unpooling, which propagates seed features back to original keypoints. Experiments show that our method reduces computational and memory complexity significantly compared with typical attention-based networks while achieving competitive or higher performance.
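
A hedged, generic sketch of attention-based pooling of keypoint features into a small set of seeds (dimensions and names are illustrative, not the paper's exact operator):

```python
import torch
import torch.nn.functional as F

def attentional_pool(seed_feats: torch.Tensor, kp_feats: torch.Tensor):
    """Aggregate keypoint features into seeds via scaled dot-product attention.

    seed_feats: (S, D) seed-match features (queries)
    kp_feats:   (N, D) keypoint features   (keys/values)
    """
    d = seed_feats.shape[-1]
    attn = F.softmax(seed_feats @ kp_feats.T / d ** 0.5, dim=-1)  # (S, N)
    return attn @ kp_feats                                        # (S, D)

seeds = torch.randn(32, 128)      # e.g., features of 32 reliable seed matches
keypoints = torch.randn(1024, 128)
pooled = attentional_pool(seeds, keypoints)
print(pooled.shape)  # torch.Size([32, 128])
```

Attentional unpooling reverses the direction: the keypoints attend to the seeds to receive the propagated messages, so the dense graph attention is routed through a small bottleneck of seeds.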

Category-Level 6D Object Pose Estimation via Cascaded Relation and Recurrent Reconstruction Networks

Comment: accepted by IROS 2021

Link: http://arxiv.org/abs/2108.08755

Abstract

Category-level 6D pose estimation, aiming to predict the location and orientation of unseen object instances, is fundamental to many scenarios such as robotic manipulation and augmented reality, yet still remains unsolved. Precisely recovering the instance 3D model in the canonical space and accurately matching it with the observation is essential when estimating 6D pose for unseen objects. In this paper, we achieve accurate category-level 6D pose estimation via cascaded relation and recurrent reconstruction networks. Specifically, a novel cascaded relation network is dedicated to advanced representation learning, exploring the complex and informative relations among the instance RGB image, instance point cloud and category shape prior. Furthermore, we design a recurrent reconstruction network for iterative residual refinement to progressively improve the reconstruction and correspondence estimations from coarse to fine. Finally, the instance 6D pose is obtained leveraging the estimated dense correspondences between the instance point cloud and the reconstructed 3D model in the canonical space. We have conducted extensive experiments on two well-acknowledged benchmarks of category-level 6D pose estimation, with significant performance improvement over existing approaches. On the representatively strict evaluation metrics of $3D_{75}$ and $5^{\circ} 2cm$, our method exceeds the latest state-of-the-art SPD by $4.9\%$ and $17.7\%$ on the CAMERA25 dataset, and by $2.7\%$ and $8.5\%$ on the REAL275 dataset. Codes are available at https://wangjiaze.cn/projects/6DPoseEstimation.html.

Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.08728

Abstract

The attention mechanism has demonstrated great potential in fine-grained visual recognition tasks. In this paper, we present a counterfactual attention learning method to learn more effective attention based on causal inference. Unlike most existing methods that learn visual attention based on conventional likelihood, we propose to learn the attention with counterfactual causality, which provides a tool to measure the attention quality and a powerful supervisory signal to guide the learning process. Specifically, we analyze the effect of the learned visual attention on network prediction through counterfactual intervention and maximize the effect to encourage the network to learn more useful attention for fine-grained image recognition. Empirically, we evaluate our method on a wide range of fine-grained recognition tasks where attention plays a crucial role, including fine-grained image categorization, person re-identification, and vehicle re-identification. The consistent improvement on all benchmarks demonstrates the effectiveness of our method. Code is available at https://github.com/raoyongming/CAL.
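
A hedged sketch of the counterfactual-intervention idea described above (architecture and shapes simplified; the full CAL training recipe is in the linked repo): compare the prediction under the learned attention with the prediction under a random "counterfactual" attention, and supervise the gap.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_classes = 128, 10
classifier = nn.Linear(feat_dim, n_classes)

def counterfactual_effect_loss(features, attention, labels):
    """features: (B, L, D) region features; attention: (B, L) learned weights."""
    factual = classifier((attention.unsqueeze(-1) * features).mean(1))
    # Counterfactual: replace the learned attention with random attention.
    rand_attn = torch.rand_like(attention).softmax(dim=-1)
    counterfact = classifier((rand_attn.unsqueeze(-1) * features).mean(1))
    # Effect of the attention = factual minus counterfactual prediction;
    # supervising the effect rewards attention that genuinely helps.
    effect = factual - counterfact
    return F.cross_entropy(effect, labels)

feats = torch.randn(4, 49, feat_dim)      # e.g., 7x7 spatial regions
attn = torch.rand(4, 49).softmax(dim=-1)  # stand-in for learned attention
labels = torch.randint(0, n_classes, (4,))
print(counterfactual_effect_loss(feats, attn, labels).item())
```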

How to cheat with metrics in single-image HDR reconstruction

Comment: ICCV 2021 workshop on Learning for Computational Imaging (LCI)

Link: http://arxiv.org/abs/2108.08713

Abstract

Single-image high dynamic range (SI-HDR) reconstruction has recently emerged as a problem well-suited for deep learning methods. Each successive technique demonstrates an improvement over existing methods by reporting higher image quality scores. This paper, however, highlights that such improvements in objective metrics do not necessarily translate to visually superior images. The first problem is the use of disparate evaluation conditions in terms of data and metric parameters, calling for a standardized protocol to make it possible to compare between papers. The second problem, which forms the main focus of this paper, is the inherent difficulty in evaluating SI-HDR reconstructions since certain aspects of the reconstruction problem dominate objective differences, thereby introducing a bias. Here, we reproduce a typical evaluation using existing as well as simulated SI-HDR methods to demonstrate how different aspects of the problem affect objective quality metrics. Surprisingly, we found that methods that do not even reconstruct HDR information can compete with state-of-the-art deep learning methods. We show how such results are not representative of the perceived quality and that SI-HDR reconstruction needs better evaluation protocols.

Real-time Image Enhancer via Learnable Spatial-aware 3D Lookup Tables

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.08697

Abstract

Recently, deep learning-based image enhancement algorithms have achieved state-of-the-art (SOTA) performance on several publicly available datasets. However, most existing methods fail to meet practical requirements either for visual perception or for computation efficiency, especially for high-resolution images. In this paper, we propose a novel real-time image enhancer via learnable spatial-aware 3-dimensional lookup tables (3D LUTs), which well considers both the global scenario and local spatial information. Specifically, we introduce a lightweight two-head weight predictor that has two outputs: one is a 1D weight vector used for image-level scenario adaptation, the other is a 3D weight map aimed at pixel-wise category fusion. We learn the spatial-aware 3D LUTs and fuse them according to the aforementioned weights in an end-to-end manner. The fused LUT is then used to transform the source image into the target tone in an efficient way. Extensive results show that our model outperforms SOTA image enhancement methods on public datasets both subjectively and objectively, and that our model takes only about 4 ms to process a 4K-resolution image on one NVIDIA V100 GPU.
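
A hedged sketch of the basic operation such methods build on: applying a 3D LUT to an RGB image with trilinear interpolation, and fusing several basis LUTs with predicted weights. This uses scipy for the interpolation and is not the paper's implementation; the fused LUT would come from the learned weight predictor.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_3d_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) in [0, 1]; lut: (S, S, S, 3) mapping RGB -> RGB."""
    s = lut.shape[0]
    coords = image.reshape(-1, 3).T * (s - 1)   # (3, H*W) grid coordinates
    out = np.stack([
        map_coordinates(lut[..., c], coords, order=1)  # trilinear lookup
        for c in range(3)
    ], axis=-1)
    return out.reshape(image.shape)

def fuse_luts(luts: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """luts: (K, S, S, S, 3); weights: (K,) predicted image-level weights."""
    return np.tensordot(weights, luts, axes=1)

# Sanity check: the identity LUT should leave the image unchanged.
identity = np.stack(np.meshgrid(*[np.linspace(0, 1, 17)] * 3, indexing="ij"), -1)
img = np.random.rand(8, 8, 3)
print(np.allclose(apply_3d_lut(img, identity), img, atol=1e-6))  # True
```

The pixel-wise 3D weight map in the paper goes one step further: instead of one fused LUT per image, the fusion weights vary per pixel.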

3DIAS: 3D Shape Reconstruction with Implicit Algebraic Surfaces

Comment: Published at ICCV 2021

Link: http://arxiv.org/abs/2108.08653

Abstract

3D shape representation has substantial effects on 3D shape reconstruction. Primitive-based representations approximate a 3D shape mainly by a set of simple implicit primitives, but the low geometrical complexity of the primitives limits the shape resolution. Moreover, setting a sufficient number of primitives for an arbitrary shape is challenging. To overcome these issues, we propose a constrained implicit algebraic surface as the primitive, with few learnable coefficients and higher geometrical complexity, together with a deep neural network that produces these primitives. Our experiments demonstrate the superiority of our method in terms of representation power compared to state-of-the-art methods in single RGB image 3D shape reconstruction. Furthermore, we show that our method can semantically learn segments of 3D shapes in an unsupervised manner. The code is publicly available at https://myavartanoo.github.io/3dias/.

Spatio-Temporal Interaction Graph Parsing Networks for Human-Object Interaction Recognition

Comment: ACM MM Oral paper

Link: http://arxiv.org/abs/2108.08633

Abstract

For a given video-based Human-Object Interaction scene, modeling the spatio-temporal relationship between humans and objects is an important cue for understanding the contextual information presented in the video. With effective spatio-temporal relationship modeling, it is possible not only to uncover contextual information in each frame but also to directly capture inter-time dependencies. Capturing the position changes of humans and objects over the spatio-temporal dimension is more critical when their appearance features do not show significant changes over time. Full use of appearance features, spatial location and semantic information is also key to improving video-based Human-Object Interaction recognition performance. In this paper, Spatio-Temporal Interaction Graph Parsing Networks (STIGPN) are constructed, which encode the videos with a graph composed of human and object nodes. These nodes are connected by two types of relations: (i) spatial relations modeling the interactions between humans and the interacted objects within each frame, and (ii) inter-time relations capturing the long-range dependencies between humans and the interacted objects across frames. With the graph, STIGPN learns spatio-temporal features directly from whole video-based Human-Object Interaction scenes. Multi-modal features and a multi-stream fusion strategy are used to enhance the reasoning capability of STIGPN. Two Human-Object Interaction video datasets, CAD-120 and Something-Else, are used to evaluate the proposed architectures, and the state-of-the-art performance demonstrates the superiority of STIGPN.

VolumeFusion: Deep Depth Fusion for 3D Scene Reconstruction

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.08623

Abstract

To reconstruct a 3D scene from a set of calibrated views, traditional multi-view stereo techniques rely on two distinct stages: local depth map computation and global depth map fusion. Recent studies concentrate on deep neural architectures for depth estimation using a conventional depth fusion method, or on direct 3D reconstruction networks that regress a Truncated Signed Distance Function (TSDF). In this paper, we advocate that replicating the traditional two-stage framework with deep neural networks improves both the interpretability and the accuracy of the results. As mentioned, our network operates in two steps: 1) the local computation of depth maps with a deep MVS technique, and 2) the fusion of the depth maps and image features to build a single TSDF volume. In order to improve the matching performance between images acquired from very different viewpoints (e.g., large baselines and rotations), we introduce a rotation-invariant 3D convolution kernel called PosedConv. The effectiveness of the proposed architecture is underlined via a large series of experiments conducted on the ScanNet dataset, where our approach compares favorably against both traditional and deep learning techniques.
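
VolumeFusion learns its fusion stage, but the classical TSDF update it generalizes is easy to state. A hedged sketch of volumetric weighted averaging (not the paper's learned fusion): each view contributes a truncated SDF and per-voxel confidence, and the global volume keeps a running weighted mean.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_tsdf, new_weight):
    """Integrate one view's truncated SDF observations into the global volume.

    tsdf, weight:         (X, Y, Z) running volume and accumulated weights
    new_tsdf, new_weight: (X, Y, Z) this view's TSDF and per-voxel confidence
    """
    total = weight + new_weight
    fused = np.where(
        total > 0,
        (weight * tsdf + new_weight * new_tsdf) / np.maximum(total, 1e-8),
        tsdf,
    )
    return fused, total

vol = np.ones((32, 32, 32))   # init: "far" everywhere (truncation value = 1)
w = np.zeros_like(vol)
for _ in range(5):            # integrate 5 synthetic views
    view_tsdf = np.random.uniform(-1, 1, vol.shape)
    view_conf = np.random.rand(*vol.shape)
    vol, w = fuse_tsdf(vol, w, view_tsdf, view_conf)
print(vol.min(), vol.max())
```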

Spatially-Adaptive Image Restoration using Distortion-Guided Networks

Comment: Accepted at ICCV 2021

Link: http://arxiv.org/abs/2108.08617

Abstract

We present a general learning-based solution for restoring images suffering from spatially-varying degradations. Prior approaches are typically degradation-specific and employ the same processing across different images and different pixels within them. However, we hypothesize that such spatially rigid processing is suboptimal for simultaneously restoring the degraded pixels and reconstructing the clean regions of the image. To overcome this limitation, we propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts computation to difficult regions in the image. SPAIR comprises two components: (1) a localization network that identifies degraded pixels, and (2) a restoration network that exploits knowledge from the localization network in the filter and feature domains to selectively and adaptively restore degraded pixels. Our key idea is to exploit the non-uniformity of heavy degradations in the spatial domain and suitably embed this knowledge within distortion-guided modules performing sparse normalization, feature extraction and attention. Our architecture is agnostic to the physical formation model and generalizes across several types of spatially-varying degradations. We demonstrate the efficacy of SPAIR individually on four restoration tasks: removal of rain streaks, raindrops, shadows and motion blur. Extensive qualitative and quantitative comparisons with prior art on 11 benchmark datasets demonstrate that our degradation-agnostic network design offers significant performance gains over state-of-the-art degradation-specific architectures. Code is available at https://github.com/human-analysis/spatially-adaptive-image-restoration.

Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization

Comment: Accepted to ACM MM 2021 (oral)

Link: http://arxiv.org/abs/2108.08596

Abstract

Domain generalization aims to enhance model robustness against domain shift without accessing the target domain. Since the available source domains for training are limited, recent approaches focus on generating samples of novel domains. Nevertheless, they either struggle with the optimization problem when synthesizing abundant domains or cause distortion of class semantics. To these ends, we propose a novel domain generalization framework where feature statistics are utilized for stylizing original features into ones with novel domain properties. To preserve class information during stylization, we first decompose features into high and low frequency components. Afterward, we stylize the low frequency components with novel domain styles sampled from the manipulated statistics, while preserving the shape cues in the high frequency ones. As the final step, we re-merge both components to synthesize novel domain features. To enhance domain robustness, we utilize the stylized features to maintain model consistency in terms of features as well as outputs. We achieve feature consistency with the proposed domain-aware supervised contrastive loss, which ensures domain invariance while increasing class discriminability. Experimental results demonstrate the effectiveness of the proposed feature stylization and the domain-aware contrastive loss. Through quantitative comparisons, we verify that our method outperforms existing state-of-the-art methods on two benchmarks, PACS and Office-Home.
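
A hedged sketch of the decompose-stylize-remerge pipeline (not the paper's exact formulation, which samples styles from manipulated batch statistics): split a feature map into low and high frequencies with an FFT mask, re-normalize the low-frequency part with jittered per-channel statistics, and re-merge.

```python
import torch

def stylize_low_freq(feat: torch.Tensor, cutoff: int = 4, jitter: float = 0.5):
    """feat: (B, C, H, W). Keep high frequencies (shape cues); renormalize
    the low-frequency part with jittered mean/std as a "novel domain" style."""
    f = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    mask = torch.zeros_like(feat)
    h, w = feat.shape[-2:]
    mask[..., h // 2 - cutoff:h // 2 + cutoff,
         w // 2 - cutoff:w // 2 + cutoff] = 1          # low-pass box mask
    low = torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1))).real
    high = feat - low

    mu = low.mean(dim=(-2, -1), keepdim=True)
    sigma = low.std(dim=(-2, -1), keepdim=True) + 1e-6
    # Sample novel-domain statistics by jittering the originals.
    new_mu = mu * (1 + jitter * torch.randn_like(mu))
    new_sigma = sigma * (1 + jitter * torch.randn_like(sigma))
    stylized_low = (low - mu) / sigma * new_sigma + new_mu
    return stylized_low + high  # re-merged novel-domain features

x = torch.randn(2, 8, 32, 32)
print(stylize_low_freq(x).shape)  # torch.Size([2, 8, 32, 32])
```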

3D Shapes Local Geometry Codes Learning with SDF

Comment: DLGC workshop at ICCV 2021

Link: http://arxiv.org/abs/2108.08593

Abstract

A signed distance function (SDF) is one of the most effective 3D shape descriptions for representing 3D geometry for rendering and reconstruction. Our work is inspired by the state-of-the-art method DeepSDF, which learns and analyzes a 3D shape as the iso-surface of its shell and has shown promising results, especially in the 3D shape reconstruction and compression domain. In this paper, we consider the degeneration problem of reconstruction coming from the capacity decrease of the DeepSDF model, which approximates the SDF with a neural network and a single latent code. We propose Local Geometry Code Learning (LGCL), a model that improves the original DeepSDF results by learning from the local shape geometry of the full 3D shape. We add an extra graph neural network to split the single transmittable latent code into a set of local latent codes distributed over the 3D shape. These latent codes are used to approximate the SDF in their local regions, which alleviates the complexity of the approximation compared to the original DeepSDF. Furthermore, we introduce a new geometric loss function to facilitate the training of these local latent codes. Note that other local shape adjusting methods use the 3D voxel representation, which is a problem that is highly difficult or even infeasible to solve. In contrast, our architecture is based on implicit graph processing and performs the learning regression process directly in the latent code space, thus making the proposed architecture more flexible and simpler to realize. Our experiments on 3D shape reconstruction demonstrate that our LGCL method can keep more details with a significantly smaller SDF decoder and considerably outperforms the original DeepSDF method under the most important quantitative metrics.

Exploiting Scene Graphs for Human-Object Interaction Detection

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.08584

Abstract

Human-Object Interaction (HOI) detection is a fundamental visual task aiming at localizing and recognizing interactions between humans and objects. Existing works focus on the visual and linguistic features of humans and objects. However, they do not capitalise on the high-level and semantic relationships present in the image, which provide crucial contextual and detailed relational knowledge for HOI inference. We propose a novel method to exploit this information, through the scene graph, for the Human-Object Interaction (SG2HOI) detection task. Our method, SG2HOI, incorporates the SG information in two ways: (1) we embed a scene graph into a global context clue, serving as the scene-specific environmental context; and (2) we build a relation-aware message-passing module to gather relationships from objects' neighborhoods and transfer them into interactions. Empirical evaluation shows that our SG2HOI method outperforms the state-of-the-art methods on two benchmark HOI datasets: V-COCO and HICO-DET. Code will be available at https://github.com/ht014/SG2HOI.

StructDepth: Leveraging the structural regularities for self-supervised indoor depth estimation

Comment: Accepted by ICCV 2021. Project is at https://github.com/SJTU-ViSYS/StructDepth

Link: http://arxiv.org/abs/2108.08574

Abstract

Self-supervised monocular depth estimation has achieved impressive performance on outdoor datasets. Its performance, however, degrades notably in indoor environments because of the lack of textures. Without rich textures, the photometric consistency is too weak to train a good depth network. Inspired by early works on indoor modeling, we leverage the structural regularities exhibited in indoor scenes to train a better depth network. Specifically, we adopt two extra supervisory signals for self-supervised training: 1) the Manhattan normal constraint and 2) the co-planar constraint. The Manhattan normal constraint enforces the major surfaces (the floor, ceiling, and walls) to be aligned with dominant directions. The co-planar constraint encourages the 3D points to be well fitted by a plane if they are located within the same planar region. To generate the supervisory signals, we adopt two components to classify the major surface normals into dominant directions and detect the planar regions on the fly during training. As the predicted depth becomes more accurate after more training epochs, the supervisory signals also improve and in turn feed back to obtain a better depth model. Through extensive experiments on indoor benchmark datasets, the results show that our network outperforms the state-of-the-art methods. The source code is available at https://github.com/SJTU-ViSYS/StructDepth.
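
The co-planar constraint says that points inside one planar region should be well fitted by a single plane. A hedged numpy sketch of the residual such a constraint could penalize (the paper's actual loss and plane detection differ): fit a plane by SVD and measure the mean point-to-plane distance.

```python
import numpy as np

def coplanar_residual(points: np.ndarray) -> float:
    """Fit a plane to (N, 3) points by SVD; return mean point-to-plane distance."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                              # direction of least variance
    dist = np.abs((points - centroid) @ normal)  # distances to the fitted plane
    return float(dist.mean())

# Points on the plane z = 0.5 plus noise: the residual tracks the noise scale.
pts = np.random.rand(100, 3)
pts[:, 2] = 0.5 + 0.01 * np.random.randn(100)
print(coplanar_residual(pts))  # ~0.008
```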

DECA: Deep viewpoint-Equivariant human pose estimation using Capsule Autoencoders

Comment: International Conference on Computer Vision 2021 (ICCV 2021), 8 pages, 4 figures, 4 tables, accepted for ICCV 2021 oral

Link: http://arxiv.org/abs/2108.08557

Abstract

Human Pose Estimation (HPE) aims at retrieving the 3D position of human joints from images or videos. We show that current 3D HPE methods suffer from a lack of viewpoint equivariance, namely they tend to fail or perform poorly when dealing with viewpoints unseen at training time. Deep learning methods often rely on either scale-invariant, translation-invariant, or rotation-invariant operations, such as max-pooling. However, the adoption of such procedures does not necessarily improve viewpoint generalization, rather leading to more data-dependent methods. To tackle this issue, we propose a novel capsule autoencoder network with fast Variational Bayes capsule routing, named DECA. By modeling each joint as a capsule entity, combined with the routing algorithm, our approach can preserve the joints' hierarchical and geometrical structure in the feature space, independently of the viewpoint. By achieving viewpoint equivariance, we drastically reduce the network's data dependency at training time, resulting in an improved ability to generalize to unseen viewpoints. In the experimental validation, we outperform other methods on depth images from both seen and unseen viewpoints, both top-view and front-view. In the RGB domain, the same network gives state-of-the-art results on the challenging viewpoint transfer task, also establishing a new framework for top-view HPE. The code can be found at https://github.com/mmlab-cv/DECA.

A Unified Objective for Novel Class Discovery

Comment: ICCV 2021 (Oral)

Link: http://arxiv.org/abs/2108.08536

Abstract

In this paper, we study the problem of Novel Class Discovery (NCD). NCD aims at inferring novel object categories in an unlabeled set by leveraging prior knowledge of a labeled set containing different, but related, classes. Existing approaches tackle this problem by considering multiple objective functions, usually involving specialized loss terms for the labeled and the unlabeled samples respectively, and often requiring auxiliary regularization terms. In this paper, we depart from this traditional scheme and introduce a UNified Objective function (UNO) for discovering novel classes, with the explicit purpose of favoring synergy between supervised and unsupervised learning. Using a multi-view self-labeling strategy, we generate pseudo-labels that can be treated homogeneously with ground truth labels. This leads to a single classification objective operating on both known and unknown classes. Despite its simplicity, UNO outperforms the state of the art by a significant margin on several benchmarks (~+10% on CIFAR-100 and +8% on ImageNet). The project page is available at: https://ncd-uno.github.io.
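
A hedged sketch of multi-view self-labeling in this spirit (the paper generates balanced pseudo-labels, e.g. with a Sinkhorn-style step; a plain softmax-argmax stands in here, and all modules are toy stand-ins): each augmented view is supervised by the other view's pseudo-labels through one unified classification head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 64))
head = nn.Linear(64, 10)  # unified head over known + novel classes
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

def augment(x):  # stand-in for two random augmentations of the same batch
    return x + 0.1 * torch.randn_like(x)

x = torch.rand(16, 3, 32, 32)
v1, v2 = augment(x), augment(x)
logits1, logits2 = head(encoder(v1)), head(encoder(v2))

with torch.no_grad():
    # Pseudo-labels from each view (UNO balances these across classes).
    y1 = logits1.softmax(-1).argmax(-1)
    y2 = logits2.softmax(-1).argmax(-1)

# Swapped prediction: each view is supervised by the other view's labels,
# so pseudo-labels are treated homogeneously with ground-truth labels.
loss = F.cross_entropy(logits1, y2) + F.cross_entropy(logits2, y1)
opt.zero_grad()
loss.backward()
opt.step()
```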

Understanding and Mitigating Annotation Bias in Facial Expression Recognition

Comment: To appear in ICCV 2021

Link: http://arxiv.org/abs/2108.08504

Abstract

The performance of a computer vision model depends on the size and quality of its training data. Recent studies have unveiled previously-unknown composition biases in common image datasets which then lead to skewed model outputs, and have proposed methods to mitigate these biases. However, most existing works assume that human-generated annotations can be considered gold-standard and unbiased. In this paper, we reveal that this assumption can be problematic, and that special care should be taken to prevent models from learning such annotation biases. We focus on facial expression recognition and compare the label biases between lab-controlled and in-the-wild datasets. We demonstrate that many expression datasets contain significant annotation biases between genders, especially when it comes to the happy and angry expressions, and that traditional methods cannot fully mitigate such biases in trained models. To remove expression annotation bias, we propose an AU-Calibrated Facial Expression Recognition (AUC-FER) framework that utilizes facial action units (AUs) and incorporates the triplet loss into the objective function. Experimental results suggest that the proposed method is more effective in removing expression annotation bias than existing techniques.
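
The triplet loss mentioned above is a standard component; a minimal PyTorch sketch using the built-in margin loss follows. The AU-calibrated pairing strategy that decides which samples form a triplet is the paper's contribution and is only suggested in the comments.

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0)

# Embeddings of facial expressions: anchor and positive share the same
# expression (ideally across genders, so the embedding cannot lean on
# gender cues); the negative is a different expression.
anchor = torch.randn(8, 128, requires_grad=True)
positive = anchor.detach() + 0.1 * torch.randn(8, 128)
negative = torch.randn(8, 128)

loss = triplet(anchor, positive, negative)
loss.backward()
print(loss.item())
```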

Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain

Comment: ICCV 2021

Link: http://arxiv.org/abs/2108.08487

Abstract

Recently, the generalization behavior of Convolutional Neural Networks (CNNs) has gradually become transparent through explanation techniques based on frequency component decomposition. However, the importance of the image's phase spectrum for a robust vision system is still ignored. In this paper, we notice that CNNs tend to converge at local optima that are closely related to the high-frequency components of the training images, while the amplitude spectrum is easily disturbed, for example by noise or common corruptions. In contrast, empirical studies have found that humans rely more on phase components to achieve robust recognition. This observation leads to more explanations of CNNs' generalization behaviors in both robustness to common perturbations and out-of-distribution detection, and motivates a new perspective on data augmentation designed by re-combining the phase spectrum of the current image and the amplitude spectrum of a distracter image. That is, the generated samples force the CNN to pay more attention to the structured information from phase components and to stay robust to variation of the amplitude. Experiments on several image datasets indicate that the proposed method achieves state-of-the-art performance on multiple generalization and calibration tasks, including adaptability to common corruptions and surface variations, out-of-distribution detection, and adversarial attack.
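
A minimal sketch of the recombination described above (training integration and the pairing strategy are omitted): keep the phase of the current image and swap in the amplitude of a distracter.

```python
import numpy as np

def amplitude_phase_recombine(img: np.ndarray, distracter: np.ndarray):
    """Keep the phase of `img` (structure) and the amplitude of `distracter`.

    Both inputs: (H, W) grayscale arrays; apply per channel for RGB.
    """
    f_img = np.fft.fft2(img)
    f_dis = np.fft.fft2(distracter)
    recombined = np.abs(f_dis) * np.exp(1j * np.angle(f_img))
    return np.real(np.fft.ifft2(recombined))

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
out = amplitude_phase_recombine(a, b)
print(out.shape)  # (64, 64): structure of `a`, spectrum magnitude of `b`
```

Training on such samples, labeled with the phase image's class, pushes the network toward phase-based (structural) features.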

Learning Anchored Unsigned Distance Functions with Gradient Direction Alignment for Single-view Garment Reconstruction

Comment: ICCV 2021

Link: http://arxiv.org/abs/2108.08478

Abstract

While single-view 3D reconstruction has made significant progress benefiting from deep shape representations in recent years, garment reconstruction is still not solved well due to open surfaces, diverse topologies and complex geometric details. In this paper, we propose a novel learnable Anchored Unsigned Distance Function (AnchorUDF) representation for 3D garment reconstruction from a single image. AnchorUDF represents 3D shapes by predicting unsigned distance fields (UDFs) to enable open garment surface modeling at arbitrary resolution. To capture diverse garment topologies, AnchorUDF not only computes pixel-aligned local image features of query points, but also leverages a set of anchor points located around the surface to enrich 3D position features for query points, which provides stronger 3D space context for the distance function. Furthermore, in order to obtain more accurate point projection direction at inference, we explicitly align the spatial gradient direction of AnchorUDF with the ground-truth direction to the surface during training. Extensive experiments on two public 3D garment datasets, i.e., MGN and Deep Fashion3D, demonstrate that AnchorUDF achieves the state-of-the-art performance on single-view garment reconstruction.
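
A hedged sketch of the gradient-alignment ingredient (the UDF network here is a toy stand-in, not AnchorUDF): differentiate the predicted distance with respect to the query points via autograd and penalize misalignment with the ground-truth direction to the surface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy UDF: maps a 3D point to a non-negative distance.
udf = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1), nn.Softplus())

def gradient_alignment_loss(points: torch.Tensor, gt_dirs: torch.Tensor):
    """points: (N, 3) queries; gt_dirs: (N, 3) unit vectors to the surface."""
    points = points.requires_grad_(True)
    d = udf(points).sum()
    # Spatial gradient of the predicted distance field at each query point.
    (grad,) = torch.autograd.grad(d, points, create_graph=True)
    grad = F.normalize(grad, dim=-1)
    # Cosine penalty: 0 when the UDF gradient matches the true direction.
    return (1 - (grad * gt_dirs).sum(-1)).mean()

pts = torch.rand(128, 3)
dirs = F.normalize(torch.randn(128, 3), dim=-1)
print(gradient_alignment_loss(pts, dirs).item())
```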

Medical Image Segmentation using 3D Convolutional Neural Networks: A Review

Comment: 17 pages, 4 figures

Link: http://arxiv.org/abs/2108.08467

Abstract

Computer-aided medical image analysis plays a significant role in assisting medical practitioners with expert clinical diagnosis and deciding the optimal treatment plan. At present, convolutional neural networks (CNNs) are the preferred choice for medical image analysis. In addition, with the rapid advancements in three-dimensional (3D) imaging systems and the availability of excellent hardware and software support to process large volumes of data, 3D deep learning methods are gaining popularity in medical image analysis. Here, we present an extensive review of the recently evolved 3D deep learning methods in medical image segmentation. Furthermore, the research gaps and future directions in 3D medical image segmentation are discussed.

Self-Supervised Video Representation Learning with Meta-Contrastive Network

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.08426

Abstract

Self-supervised learning has been successfully applied to pre-train video representations, aiming at efficient adaptation from the pre-training domain to downstream tasks. Existing approaches merely leverage a contrastive loss to learn instance-level discrimination. However, the lack of category information leads to a hard-positive problem that constrains the generalization ability of such methods. We find that the multi-task process of meta learning can provide a solution to this problem. In this paper, we propose a Meta-Contrastive Network (MCN), which combines contrastive learning and meta learning, to enhance the learning ability of existing self-supervised approaches. Our method contains two training stages based on model-agnostic meta learning (MAML), each of which consists of a contrastive branch and a meta branch. Extensive evaluations demonstrate the effectiveness of our method. For two downstream tasks, i.e., video action recognition and video retrieval, MCN outperforms state-of-the-art approaches on the UCF101 and HMDB51 datasets. To be more specific, with an R(2+1)D backbone, MCN achieves Top-1 accuracies of 84.8% and 54.5% for video action recognition, as well as 52.5% and 23.7% for video retrieval.

Generating Smooth Pose Sequences for Diverse Human Motion Prediction

Comment: ICCV 2021 (oral)

Link: http://arxiv.org/abs/2108.08422

Abstract

Recent progress in stochastic motion prediction, i.e., predicting multiple possible future human motions given a single past pose sequence, has led to producing truly diverse future motions and even providing control over the motion of some body parts. However, to achieve this, the state-of-the-art method requires learning several mappings for diversity and a dedicated model for controllable motion prediction. In this paper, we introduce a unified deep generative network for both diverse and controllable motion prediction. To this end, we leverage the intuition that realistic human motions consist of smooth sequences of valid poses, and that, given limited data, learning a pose prior is much more tractable than a motion one. We therefore design a generator that predicts the motion of different body parts sequentially, and introduce a normalizing flow based pose prior, together with a joint angle loss, to achieve motion realism. Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy. The code is available at https://github.com/wei-mao-2019/gsps

Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes

Comment: ICCV'21 Accepted

Link: http://arxiv.org/abs/2108.08421

Abstract

Vision systems that deploy Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples. Recent research has shown that checking the intrinsic consistencies in the input data is a promising way to detect adversarial attacks (e.g., by checking the object co-occurrence relationships in complex scenes). However, existing approaches are tied to specific models and do not offer generalizability. Motivated by the observation that language descriptions of natural scene images have already captured the object co-occurrence relationships that can be learned by a language model, we develop a novel approach to perform context consistency checks using such language models. The distinguishing aspect of our approach is that it is independent of the deployed object detector and yet offers very high accuracy in terms of detecting adversarial examples in practical scenes with multiple objects.

Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning

Comment: Initial submission; appeared as spotlight talk in ICML 2021 Workshop on Theory of RL

Link: http://arxiv.org/abs/2108.08812

Abstract

Actor-critic methods are widely used in offline reinforcement learning practice, but are not so well-understood theoretically. We propose a new offline actor-critic algorithm that naturally incorporates the pessimism principle, leading to several key advantages compared to the state of the art. The algorithm can operate when the Bellman evaluation operator is closed with respect to the action value function of the actor's policies; this is a more general setting than the low-rank MDP model. Despite the added generality, the procedure is computationally tractable as it involves the solution of a sequence of second-order programs. We prove an upper bound on the suboptimality gap of the policy returned by the procedure that depends on the data coverage of any arbitrary, possibly data-dependent comparator policy. The achievable guarantee is complemented with a minimax lower bound that is matching up to logarithmic factors.
