

Today's arXiv Picks | 29 Top-Conference Papers: ACM MM / ICCV / CIKM / AAAI / IJCAI

Published: 2024/10/8

About #Today's arXiv Picks#

This is a column from "AI Academic Frontier": each day, the editors select high-quality papers from arXiv and deliver them to readers.

Group-based Distinctive Image Captioning with Memory Attention

Comment: Accepted at ACM MM 2021 (oral)

Link: http://arxiv.org/abs/2108.09151

Abstract

Describing images using natural language is widely known as image captioning, which has made consistent progress due to the development of computer vision and natural language generation techniques. Though conventional captioning models achieve high accuracy based on popular metrics, i.e., BLEU, CIDEr, and SPICE, the ability of captions to distinguish the target image from other similar images is under-explored. To generate distinctive captions, a few pioneers employ contrastive learning or re-weight the ground-truth captions, which focuses on one single input image. However, the relationships between objects in a similar image group (e.g., items or properties within the same album or fine-grained events) are neglected. In this paper, we improve the distinctiveness of image captions using a Group-based Distinctive Captioning Model (GdisCap), which compares each image with other images in one similar group and highlights the uniqueness of each image. In particular, we propose a group-based memory attention (GMA) module, which stores object features that are unique among the image group (i.e., with low similarity to objects in other images). These unique object features are highlighted when generating captions, resulting in more distinctive captions. Furthermore, the distinctive words in the ground-truth captions are selected to supervise the language decoder and GMA. Finally, we propose a new evaluation metric, distinctive word rate (DisWordRate), to measure the distinctiveness of captions. Quantitative results indicate that the proposed method significantly improves the distinctiveness of several baseline models, and achieves state-of-the-art performance on both accuracy and distinctiveness. Results of a user study agree with the quantitative evaluation and demonstrate the rationality of the new metric DisWordRate.
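The core of the GMA idea, scoring how unique each object is within its image group, can be sketched with plain cosine similarities. This is a minimal illustration, not the paper's implementation; the function name and the toy features are invented for the example.

```python
import numpy as np

def uniqueness_weights(group_feats):
    """Score each object feature by 1 - (max cosine similarity to any
    object in the *other* images of the group), so objects that no
    other image contains get weights near 1."""
    normed = [f / np.linalg.norm(f, axis=1, keepdims=True) for f in group_feats]
    weights = []
    for i, fi in enumerate(normed):
        others = np.vstack([f for j, f in enumerate(normed) if j != i])
        sim = fi @ others.T                    # pairwise cosine similarities
        weights.append(1.0 - sim.max(axis=1))  # unique objects -> high weight
    return weights

# Toy group of two images: image 0 shares one object direction with
# image 1 but also contains a distinctive second object.
img0 = np.array([[1.0, 0.0], [0.0, 1.0]])
img1 = np.array([[1.0, 0.1]])
w = uniqueness_weights([img0, img1])
```

In this toy group, the shared object in image 0 receives a near-zero weight while the distinctive one is weighted highly, which is the signal GdisCap uses to emphasize unique content when captioning.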

Airbert: In-domain Pretraining for Vision-and-Language Navigation

Comment: To be published at ICCV 2021. Webpage at https://airbert-vln.github.io/, linking to our dataset, code and models

Link: http://arxiv.org/abs/2108.09105

Abstract

Vision-and-language navigation (VLN) aims to enable embodied agents to navigate in realistic environments using natural language instructions. Given the scarcity of domain-specific training data and the high diversity of image and language inputs, the generalization of VLN agents to unseen environments remains challenging. Recent methods explore pretraining to improve generalization; however, the use of generic image-caption datasets or existing small-scale VLN environments is suboptimal and results in limited improvements. In this work, we introduce BnB, a large-scale and diverse in-domain VLN dataset. We first collect image-caption (IC) pairs from hundreds of thousands of listings from online rental marketplaces. Using IC pairs, we next propose automatic strategies to generate millions of VLN path-instruction (PI) pairs. We further propose a shuffling loss that improves the learning of temporal order inside PI pairs. We use BnB to pretrain our Airbert model, which can be adapted to discriminative and generative settings, and show that it outperforms the state of the art on the Room-to-Room (R2R) navigation and Remote Referring Expression (REVERIE) benchmarks. Moreover, our in-domain pretraining significantly increases performance on a challenging few-shot VLN evaluation, where we train the model only on VLN instructions from a few houses.

GEDIT: Geographic-Enhanced and Dependency-Guided Tagging for Joint POI and Accessibility Extraction at Baidu Maps

Comment: Accepted by CIKM'21

Link: http://arxiv.org/abs/2108.09104

Abstract

Providing timely accessibility reminders of a point-of-interest (POI) plays a vital role in improving user satisfaction when finding places and making visiting decisions. However, it is difficult to keep the POI database in sync with its real-world counterparts due to the dynamic nature of business changes. To alleviate this problem, we formulate and present a practical solution that jointly extracts POI mentions and identifies their coupled accessibility labels from unstructured text. We approach this task as a sequence tagging problem, where the goal is to produce (POI name, accessibility label) pairs from unstructured text. This task is challenging because of two main issues: (1) POI names are often newly-coined words, so as to successfully register new entities or brands, and (2) there may exist multiple pairs in the text, which necessitates dealing with one-to-many or many-to-one mapping to make each POI coupled with its accessibility label. To this end, we propose a Geographic-Enhanced and Dependency-guIded sequence Tagging (GEDIT) model to concurrently address the two challenges. First, to alleviate challenge #1, we develop a geographic-enhanced pre-trained model to learn the text representations. Second, to mitigate challenge #2, we apply a relational graph convolutional network to learn the tree node representations from the parsed dependency tree. Finally, we construct a neural sequence tagging model by integrating and feeding the previously pre-learned representations into a CRF layer. Extensive experiments conducted on a real-world dataset demonstrate the superiority and effectiveness of GEDIT. In addition, it has already been deployed in production at Baidu Maps. Statistics show that the proposed solution can save significant human effort and labor costs in processing the same amount of documents, which confirms that it is a practical way to maintain POI accessibility.

SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles

Comment: Preprint of CIKM 2021 Resource Paper, 10 pages

Link: http://arxiv.org/abs/2108.09070

Abstract

Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci (Software Mentions in Science), a gold standard knowledge graph of software mentions in scientific articles. It contains high-quality annotations (IRR: $\kappa{=}.82$) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL, or citations. Moreover, we distinguish between different types, such as application, plugin, or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.

Twitter User Representation using Weakly Supervised Graph Embedding

Comment: accepted at the 16th International AAAI Conference on Web and Social Media (ICWSM-2022), direct accept from May 2021 submission, 12 pages

Link: http://arxiv.org/abs/2108.08988

Abstract

Social media platforms provide convenient means for users to participate in multiple online activities on various contents and create fast widespread interactions. However, this rapidly growing access has also increased the diversity of information, and characterizing user types to understand people's lifestyle decisions shared in social media is challenging. In this paper, we propose a weakly supervised graph embedding based framework for understanding user types. We evaluate the user embeddings learned using weak supervision over well-being related tweets from Twitter, focusing on 'Yoga' and 'Keto diet'. Experiments on real-world datasets demonstrate that the proposed framework outperforms the baselines for detecting user types. Finally, we illustrate data analysis on different types of users (e.g., practitioner vs. promotional) from our dataset. While we focus on lifestyle-related tweets (i.e., yoga, keto), our method for constructing user representations readily generalizes to other domains.

SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining

Comment: ACL 2021

Link: http://arxiv.org/abs/2108.08983

Abstract

Recently, the performance of Pre-trained Language Models (PLMs) has been significantly improved by injecting knowledge facts to enhance their abilities of language understanding. For medical domains, background knowledge sources are especially useful, because the massive medical terms and their complicated relations are difficult to understand in text. In this work, we introduce SMedBERT, a medical PLM trained on large-scale medical corpora, incorporating deep structured semantic knowledge from the neighbors of linked entities. In SMedBERT, the mention-neighbor hybrid attention is proposed to learn heterogeneous-entity information, which infuses the semantic representations of entity types into the homogeneous neighboring entity structure. Apart from knowledge integration as external features, we propose to employ the neighbors of linked entities in the knowledge graph as additional global contexts of text mentions, allowing them to communicate via shared neighbors, thus enriching their semantic representations. Experiments demonstrate that SMedBERT significantly outperforms strong baselines in various knowledge-intensive Chinese medical tasks. It also improves the performance of other tasks such as question answering, question matching, and natural language inference.

Discriminative Region-based Multi-Label Zero-Shot Learning

Comment: Accepted to ICCV 2021. Source code is available at https://github.com/akshitac8/BiAM

Link: http://arxiv.org/abs/2108.09301

Abstract

Multi-label zero-shot learning (ZSL) is a more realistic counterpart of standard single-label ZSL, since several objects can co-exist in a natural image. However, the occurrence of multiple objects complicates the reasoning and requires region-specific processing of visual features to preserve their contextual cues. We note that the best existing multi-label ZSL method takes a shared approach towards attending to region features, with a common set of attention maps for all the classes. Such shared maps lead to diffused attention, which does not discriminatively focus on relevant locations when the number of classes is large. Moreover, mapping spatially-pooled visual features to the class semantics leads to inter-class feature entanglement, thus hampering the classification. Here, we propose an alternate approach towards region-based discriminability-preserving multi-label zero-shot classification. Our approach maintains the spatial resolution to preserve region-level characteristics and utilizes a bi-level attention module (BiAM) to enrich the features by incorporating both region and scene context information. The enriched region-level features are then mapped to the class semantics, and only their class predictions are spatially pooled to obtain image-level predictions, thereby keeping the multi-class features disentangled. Our approach sets a new state of the art on two large-scale multi-label zero-shot benchmarks: NUS-WIDE and Open Images. On NUS-WIDE, our approach achieves an absolute gain of 6.9% mAP for ZSL, compared to the best published results.

MG-GAN: A Multi-Generator Model Preventing Out-of-Distribution Samples in Pedestrian Trajectory Prediction

Comment: Accepted at ICCV 2021; Code available: https://github.com/selflein/MG-GAN

Link: http://arxiv.org/abs/2108.09274

Abstract

Pedestrian trajectory prediction is challenging due to its uncertain and multimodal nature. While generative adversarial networks can learn a distribution over future trajectories, they tend to predict out-of-distribution samples when the distribution of future trajectories is a mixture of multiple, possibly disconnected modes. To address this issue, we propose a multi-generator model for pedestrian trajectory prediction. Each generator specializes in learning a distribution over trajectories routing towards one of the primary modes in the scene, while a second network learns a categorical distribution over these generators, conditioned on the dynamics and scene input. This architecture allows us to effectively sample from specialized generators and to significantly reduce out-of-distribution samples compared to single-generator methods.
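The two-stage sampling scheme is simple to sketch: first draw a generator index from the predicted categorical distribution, then sample from that generator alone. The toy "generators" below are hypothetical stand-ins for the learned networks, with two disconnected modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each "generator" covers one trajectory mode.
generators = [
    lambda z: np.array([1.0, 0.0]) + 0.05 * z,   # mode A: head right
    lambda z: np.array([-1.0, 0.0]) + 0.05 * z,  # mode B: head left
]

def sample_trajectory(pi, n=1):
    """Sample a generator index from the categorical distribution pi
    (in MG-GAN, predicted from scene and dynamics), then sample from
    that generator only. Drawing per mode avoids producing samples
    that fall between disconnected modes."""
    samples = []
    for _ in range(n):
        k = rng.choice(len(generators), p=pi)
        z = rng.standard_normal(2)
        samples.append(generators[k](z))
    return np.array(samples)

samples = sample_trajectory(pi=[0.5, 0.5], n=200)
```

Every draw lands near one of the two modes; a single generator trained on the same mixture would also place mass in the empty region between them.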

Continual Learning for Image-Based Camera Localization

Comment: ICCV 2021

Link: http://arxiv.org/abs/2108.09112

Abstract

For several emerging technologies such as augmented reality, autonomous driving and robotics, visual localization is a critical component. Directly regressing camera pose/3D scene coordinates from the input image using deep neural networks has shown great potential. However, such methods assume a stationary data distribution with all scenes simultaneously available during training. In this paper, we approach the problem of visual localization in a continual learning setup, whereby the model is trained on scenes in an incremental manner. Our results show that, similar to the classification domain, non-stationary data induces catastrophic forgetting in deep networks for visual localization. To address this issue, a strong baseline based on storing and replaying images from a fixed buffer is proposed. Furthermore, we propose a new sampling method based on coverage score (Buff-CS) that adapts existing sampling strategies in the buffering process to the problem of visual localization. Results demonstrate consistent improvements over standard buffering methods on two challenging datasets, 7Scenes and 12Scenes, and also on 19Scenes, which combines the former scenes.

Single Image Defocus Deblurring Using Kernel-Sharing Parallel Atrous Convolutions

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.09108

Abstract

This paper proposes a novel deep learning approach for single image defocus deblurring based on inverse kernels. In a defocused image, the blur shapes are similar among pixels although the blur sizes can spatially vary. To utilize this property with inverse kernels, we exploit the observation that when only the size of a defocus blur changes while keeping the shape, the shape of the corresponding inverse kernel remains the same and only the scale changes. Based on the observation, we propose a kernel-sharing parallel atrous convolutional (KPAC) block specifically designed by incorporating the property of inverse kernels for single image defocus deblurring. To effectively simulate the invariant shapes of inverse kernels with different scales, KPAC shares the same convolutional weights among multiple atrous convolution layers. To efficiently simulate the varying scales of inverse kernels, KPAC consists of only a few atrous convolution layers with different dilations and learns per-pixel scale attentions to aggregate the outputs of the layers. KPAC also utilizes shape attention to combine the outputs of multiple convolution filters in each atrous convolution layer, to deal with defocus blur with a slightly varying shape. We demonstrate that our approach achieves state-of-the-art performance with a much smaller number of parameters than previous methods.
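The weight-sharing idea can be illustrated with a toy 1-D atrous convolution: one kernel, reused at several dilations, then mixed by per-pixel scale attention. This is only an illustrative sketch, not the KPAC implementation, which operates on 2-D feature maps with learned attention.

```python
import numpy as np

def atrous_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D convolution whose taps are spread apart by
    `dilation` (atrous convolution)."""
    k = len(kernel)
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

# One *shared* kernel reused at several dilations: the same kernel shape
# applied at different scales, mirroring the weight sharing in KPAC.
kernel = np.array([0.25, 0.5, 0.25])
x = np.zeros(11)
x[5] = 1.0                                   # unit impulse input
outs = [atrous_conv1d(x, kernel, d) for d in (1, 2, 3)]

# Per-pixel scale attention (uniform here for illustration) mixes the scales.
attn = np.full((3, len(x)), 1.0 / 3.0)
y = sum(a * o for a, o in zip(attn, outs))
```

The impulse responses show the point: dilation stretches the same kernel shape over a wider support without introducing any new weights, which is exactly the scale/shape decoupling the block exploits.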

Towards Understanding the Generative Capability of Adversarially Robust Classifiers

Comment: Accepted by ICCV 2021, Oral

Link: http://arxiv.org/abs/2108.09093

Abstract

Recently, some works found an interesting phenomenon: adversarially robust classifiers can generate good images comparable to those from generative models. We investigate this phenomenon from an energy perspective and provide a novel explanation. We reformulate adversarial example generation, adversarial training, and image generation in terms of an energy function. We find that adversarial training contributes to obtaining an energy function that is flat and has low energy around the real data, which is the key to generative capability. Based on our new understanding, we further propose a better adversarial training method, Joint Energy Adversarial Training (JEAT), which can generate high-quality images and achieve new state-of-the-art robustness under a wide range of attacks. The Inception Score of the images (CIFAR-10) generated by JEAT is 8.80, much better than that of original robust classifiers (7.50). In particular, we achieve new state-of-the-art robustness on CIFAR-10 (from 57.20% to 62.04%) and CIFAR-100 (from 30.03% to 30.18%) without extra training data.
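The energy view is easy to make concrete. A standard way to assign an energy to a classifier's input is E(x) = -logsumexp(f(x)) over the logits f(x), so inputs that receive high total unnormalized probability mass get low energy. The specific logits below are invented for illustration; JEAT's exact formulation is in the paper.

```python
import numpy as np

def energy(logits):
    """Energy of an input under a classifier: E(x) = -logsumexp(f(x)).
    Lower energy means the classifier assigns more total unnormalized
    mass to the input, i.e. it looks more 'real' to the model.
    Computed with the max-shift trick for numerical stability."""
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))

confident = np.array([8.0, 0.0, 0.0])   # strongly recognized as class 0
uncertain = np.array([0.0, 0.0, 0.0])   # ambiguous / off-manifold input
```

Under this definition, generating an image amounts to descending the energy surface, and the paper's observation is that adversarial training flattens that surface around real data.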

AutoLay: Benchmarking amodal layout estimation for autonomous driving

Comment: published in the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Link: http://arxiv.org/abs/2108.09047

Abstract

Given an image or a video captured from a monocular camera, amodal layout estimation is the task of predicting semantics and occupancy in bird's eye view. The term amodal implies we also reason about entities in the scene that are occluded or truncated in image space. While several recent efforts have tackled this problem, there is a lack of standardization in task specification, datasets, and evaluation protocols. We address these gaps with AutoLay, a dataset and benchmark for amodal layout estimation from monocular images. AutoLay encompasses driving imagery from two popular datasets: KITTI and Argoverse. In addition to fine-grained attributes such as lanes, sidewalks, and vehicles, we also provide semantically annotated 3D point clouds. We implement several baseline and bleeding-edge approaches, and release our data and code.

Out-of-boundary View Synthesis Towards Full-Frame Video Stabilization

Comment: 10 pages, 6 figures, accepted by ICCV 2021

Link: http://arxiv.org/abs/2108.09041

Abstract

Warping-based video stabilizers smooth the camera trajectory by constraining each pixel's displacement and warp stabilized frames from unstable ones accordingly. However, since the view outside the boundary is not available during warping, the resulting holes around the boundary of the stabilized frame must be discarded (i.e., cropping) to maintain visual consistency, which leads to a tradeoff between stability and cropping ratio. In this paper, we make a first attempt to address this issue by proposing a new Out-of-boundary View Synthesis (OVS) method. Exploiting the spatial coherence between adjacent frames and within each frame, OVS extrapolates the out-of-boundary view by aligning adjacent frames to each reference frame. Technically, it first calculates the optical flow and propagates it to the outer boundary region according to the affinity, and then warps pixels accordingly. OVS can be integrated into existing warping-based stabilizers as a plug-and-play module to significantly improve the cropping ratio of the stabilized results. In addition, stability is improved because the jitter amplification effect caused by cropping and resizing is reduced. Experimental results on the NUS benchmark show that OVS can improve the performance of five representative state-of-the-art methods in terms of objective metrics and subjective visual quality. The code is publicly available at https://github.com/Annbless/OVS_Stabilization.

Video-based Person Re-identification with Spatial and Temporal Memory Networks

Comment: International Conference on Computer Vision (ICCV) 2021

Link: http://arxiv.org/abs/2108.09039

Abstract

Video-based person re-identification (reID) aims to retrieve person videos with the same identity as a query person across multiple cameras. Spatial and temporal distractors in person videos, such as background clutter and partial occlusions over frames, respectively, make this task much more challenging than image-based person reID. We observe that spatial distractors appear consistently in a particular location, and temporal distractors show several patterns, e.g., partial occlusions occur in the first few frames, where such patterns provide informative cues for predicting which frames to focus on (i.e., temporal attentions). Based on this, we introduce novel Spatial and Temporal Memory Networks (STMN). The spatial memory stores features for spatial distractors that frequently emerge across video frames, while the temporal memory saves attentions which are optimized for typical temporal patterns in person videos. We leverage the spatial and temporal memories to refine frame-level person representations and to aggregate the refined frame-level features into a sequence-level person representation, respectively, effectively handling spatial and temporal distractors in person videos. We also introduce a memory spread loss that prevents our model from addressing particular items only in the memories. Experimental results on standard benchmarks, including MARS, DukeMTMC-VideoReID, and LS-VID, demonstrate the effectiveness of our method.
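A memory read of the kind STMN relies on is soft attention over stored slots: match a query against memory keys and return the attention-weighted sum of memory values. The toy two-slot memory below (e.g. two frequent background patterns) is invented for illustration and is not the paper's learned memory.

```python
import numpy as np

def memory_read(query, keys, values):
    """Soft attention read: score the query against every memory key,
    softmax the scores, and return the weighted sum of memory values
    together with the attention weights."""
    scores = keys @ query
    scores = scores - scores.max()           # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()
    return attn @ values, attn

# Toy memory with two slots, e.g. two frequently seen distractor patterns.
keys = np.array([[1.0, 0.0], [0.0, 1.0]])
values = np.array([[10.0, 0.0], [0.0, 10.0]])
read, attn = memory_read(np.array([5.0, 0.0]), keys, values)
```

A frame whose feature matches the first key retrieves almost exclusively the first slot's value, which STMN then uses to refine (for the spatial memory) or to attend over frames (for the temporal memory).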

Is it Time to Replace CNNs with Transformers for Medical Images?

Comment: Originally published at the ICCV 2021 Workshop on Computer Vision for Automated Medical Diagnosis (CVAMD)

Link: http://arxiv.org/abs/2108.09038

Abstract

Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis. Recently, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding similar levels of performance while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore whether it is time to move to transformer-based models or if we should keep working with CNNs: can we trivially switch to transformers? If so, what are the advantages and drawbacks of switching to ViTs for medical image diagnosis? We consider these questions in a series of experiments on three mainstream medical image datasets. Our findings show that, while CNNs perform better when trained from scratch, off-the-shelf vision transformers using default hyperparameters are on par with CNNs when pretrained on ImageNet, and outperform their CNN counterparts when pretrained using self-supervision.

AdvDrop: Adversarial Attack to DNNs by Dropping Information

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.09034

Abstract

Humans can easily recognize visual objects with lost information, even when most details are lost and only the contour is preserved, e.g., cartoons. However, in terms of the visual perception of Deep Neural Networks (DNNs), the ability to recognize abstract objects (visual objects with lost information) is still a challenge. In this work, we investigate this issue from an adversarial viewpoint: will the performance of DNNs decrease even for images that only lose a little information? Towards this end, we propose a novel adversarial attack, named \textit{AdvDrop}, which crafts adversarial examples by dropping existing information from images. Previously, most adversarial attacks explicitly add extra disturbing information to clean images. In contrast to previous works, our proposed work explores the adversarial robustness of DNN models from a novel perspective, by dropping imperceptible details to craft adversarial examples. We demonstrate the effectiveness of \textit{AdvDrop} through extensive experiments, and show that this new type of adversarial example is more difficult to defend against with current defense systems.
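To build intuition for "attacking by dropping information", one can quantize an image's frequency coefficients, a crude, unlearned stand-in for the optimized quantization AdvDrop performs in a transform domain. A plain FFT is used here purely for illustration (the paper works with a DCT-style transform and learned quantization tables); the step sizes are arbitrary toy values.

```python
import numpy as np

def drop_information(img, q):
    """Quantize the image's frequency coefficients with step q and
    reconstruct: larger q rounds more coefficients away and so
    discards more detail, without *adding* any perturbation signal."""
    freq = np.fft.fft2(img)
    freq_q = np.round(freq / q) * q          # coarse quantization = dropping info
    return np.real(np.fft.ifft2(freq_q))

rng = np.random.default_rng(1)
img = rng.random((8, 8))                     # toy 8x8 "image"
mild = drop_information(img, 0.5)            # small step: little detail lost
harsh = drop_information(img, 50.0)          # large step: most detail lost
```

The attack's search problem is then to find the mildest quantization, imperceptible to humans, that still flips the classifier's decision.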

Pixel Contrastive-Consistent Semi-Supervised Semantic Segmentation

Comment: To appear in ICCV 2021

Link: http://arxiv.org/abs/2108.09025

Abstract

We present a novel semi-supervised semantic segmentation method which jointly achieves two desiderata of segmentation model regularities: the label-space consistency property between image augmentations and the feature-space contrastive property among different pixels. We leverage the pixel-level L2 loss and the pixel contrastive loss for the two purposes, respectively. To address the computational efficiency issue and the false-negative noise issue involved in the pixel contrastive loss, we further introduce and investigate several negative sampling techniques. Extensive experiments demonstrate the state-of-the-art performance of our method (PC2Seg) with the DeepLab-v3+ architecture, in several challenging semi-supervised settings derived from the VOC, Cityscapes, and COCO datasets.
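The pixel contrastive term follows the familiar InfoNCE shape: pull a pixel embedding toward a positive from the same class and push it away from sampled negatives. A minimal single-anchor sketch with invented, L2-normalized embeddings (the paper's batching, sampling strategies, and exact loss form are in the source):

```python
import numpy as np

def pixel_contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss for one pixel embedding: high similarity to
    the positive and low similarity to the negatives gives low loss.
    Embeddings are assumed L2-normalized; tau is the temperature."""
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    return -np.log(pos / (pos + neg))

a = np.array([1.0, 0.0])                        # anchor pixel embedding
good_pos = np.array([1.0, 0.0])                 # same-class pixel
bad_pos = np.array([0.0, 1.0])                  # mislabeled "positive"
negs = np.array([[0.0, 1.0], [-1.0, 0.0]])      # sampled negatives
l_good = pixel_contrastive_loss(a, good_pos, negs)
l_bad = pixel_contrastive_loss(a, bad_pos, negs)
```

The gap between `l_good` and `l_bad` also shows why false negatives hurt: a same-class pixel wrongly placed in `negs` would be pushed away exactly like a true negative, which is the noise issue the paper's sampling techniques target.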

Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data

Comment: Accepted to ICCV 2021

Link: http://arxiv.org/abs/2108.09020

Abstract

Continual learning is the problem of learning and retaining knowledge through time over multiple tasks and environments. Research has primarily focused on the incremental classification setting, where new tasks/classes are added at discrete time intervals. Such an "offline" setting does not evaluate the ability of agents to learn effectively and efficiently, since an agent can perform multiple learning epochs without any time limitation when a task is added. We argue that "online" continual learning, where data is a single continuous stream without task boundaries, enables evaluating both information retention and online learning efficacy. In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online. Trained models are later evaluated on historical data to assess information retention. We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts. Through a large-scale analysis, we identify critical and previously unobserved phenomena of gradient-based optimization in continual learning, and propose effective strategies for improving gradient-based online continual learning with real data. The source code and dataset are available at: https://github.com/IntelLabs/continuallearning.
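The test-then-train protocol described above fits in a few lines: every incoming batch is evaluated before the model is allowed to train on it, so accuracy always reflects unseen data even as the stream drifts. The `MajorityModel` below is a trivial invented stand-in, not the paper's learner.

```python
def online_continual_eval(model, stream):
    """Online protocol: test on each incoming batch first, then train
    on that same batch, and report accumulated online accuracy."""
    correct = total = 0
    for x, y in stream:
        pred = model.predict(x)          # test first...
        correct += sum(p == t for p, t in zip(pred, y))
        total += len(y)
        model.fit(x, y)                  # ...then train on the same batch
    return correct / total

# Minimal stand-in learner: always predicts the majority label seen so far.
class MajorityModel:
    def __init__(self):
        self.counts = {}
    def predict(self, x):
        if not self.counts:
            return [None] * len(x)
        top = max(self.counts, key=self.counts.get)
        return [top] * len(x)
    def fit(self, x, y):
        for t in y:
            self.counts[t] = self.counts.get(t, 0) + 1

# A tiny stream whose label distribution shifts from 'a' to 'b'.
stream = [([0, 0], ['a', 'a']), ([0, 0], ['a', 'a']), ([0, 0], ['b', 'b'])]
acc = online_continual_eval(MajorityModel(), stream)
```

The shift in the last batch is scored against a model that has only seen 'a', which is exactly the kind of online failure the benchmark is built to expose; retention would additionally be measured by re-testing on earlier batches.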

DeFRCN: Decoupled Faster R-CNN for Few-Shot Object Detection

Comment: Accepted by ICCV 2021

Link: http://arxiv.org/abs/2108.09017

Abstract

Few-shot object detection, which aims at detecting novel objects rapidly from extremely few annotated examples of previously unseen classes, has attracted significant research interest in the community. Most existing approaches employ Faster R-CNN as the basic detection framework; yet, due to the lack of tailored considerations for the data-scarce scenario, their performance is often not satisfactory. In this paper, we look closely into the conventional Faster R-CNN and analyze its contradictions from two orthogonal perspectives, namely multi-stage (RPN vs. RCNN) and multi-task (classification vs. localization). To resolve these issues, we propose a simple yet effective architecture, named Decoupled Faster R-CNN (DeFRCN). To be concrete, we extend Faster R-CNN by introducing a Gradient Decoupled Layer for multi-stage decoupling and a Prototypical Calibration Block for multi-task decoupling. The former is a novel deep layer that redefines the feature-forward and gradient-backward operations to decouple its subsequent layer from its preceding layer, and the latter is an offline prototype-based classification model that takes the proposals from the detector as input and boosts the original classification scores with additional pairwise scores for calibration. Extensive experiments on multiple benchmarks show our framework is remarkably superior to other existing approaches and establishes a new state of the art in the few-shot literature.

Dual Projection Generative Adversarial Networks for Conditional Image Generation

Comment: Accepted at ICCV-21

Link: http://arxiv.org/abs/2108.09016

Abstract

Conditional Generative Adversarial Networks (cGANs) extend the standard unconditional GAN framework to learning joint data-label distributions from samples, and have been established as powerful generative models capable of generating high-fidelity imagery. A challenge of training such a model lies in properly infusing class information into its generator and discriminator. For the discriminator, class conditioning can be achieved by either (1) directly incorporating labels as input or (2) involving labels in an auxiliary classification loss. In this paper, we show that the former directly aligns the class-conditioned fake-and-real data distributions $P(\text{image}|\text{class})$ ({\em data matching}), while the latter aligns data-conditioned class distributions $P(\text{class}|\text{image})$ ({\em label matching}). Although class separability does not directly translate to sample quality, and becomes a burden if classification itself is intrinsically difficult, the discriminator cannot provide useful guidance for the generator if features of distinct classes are mapped to the same point and thus become inseparable. Motivated by this intuition, we propose a Dual Projection GAN (P2GAN) model that learns to balance between {\em data matching} and {\em label matching}. We then propose an improved cGAN model with Auxiliary Classification that directly aligns the fake and real conditionals $P(\text{class}|\text{image})$ by minimizing their $f$-divergence. Experiments on a synthetic Mixture of Gaussians (MoG) dataset and a variety of real-world datasets including CIFAR100, ImageNet, and VGGFace2 demonstrate the efficacy of our proposed models.
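The two conditioning routes contrasted above can be written down directly: the projection form scores a feature against a learned class embedding (the data-matching route), while an auxiliary classifier head scores log P(class|image) (the label-matching route). All weights and features below are invented toy values, not trained parameters.

```python
import numpy as np

def projection_score(feat, class_emb, psi_w):
    """'Data matching' route: condition the discriminator by projecting
    the image feature onto a class embedding, plus an unconditional
    term (the projection-discriminator way of taking labels as input)."""
    return float(feat @ class_emb + feat @ psi_w)

def aux_class_log_prob(feat, W, y):
    """'Label matching' route: an auxiliary classifier head scoring
    log P(class | image) for the feature via a stable log-softmax."""
    logits = W @ feat
    logits = logits - logits.max()
    return float(logits[y] - np.log(np.exp(logits).sum()))

feat = np.array([1.0, 0.0])                  # hypothetical image feature
class_emb = np.array([2.0, 0.0])             # embedding of the true class
psi_w = np.array([0.5, 0.0])                 # unconditional scoring weights
W = np.array([[3.0, 0.0], [0.0, 3.0]])       # classifier head weights
s = projection_score(feat, class_emb, psi_w)
lp = aux_class_log_prob(feat, W, y=0)
```

P2GAN's contribution is to keep both heads and learn how much weight each deserves per task, rather than committing to one conditioning route up front.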

GAN Inversion for Out-of-Range Images with Geometric Transformations

Comment: Accepted to ICCV 2021. For supplementary material, see https://kkang831.github.io/publication/ICCV_2021_BDInvert/

Link: http://arxiv.org/abs/2108.08998

Abstract

For successful semantic editing of real images, it is critical for a GAN inversion method to find an in-domain latent code that aligns with the domain of a pre-trained GAN model. Unfortunately, such in-domain latent codes can be found only for in-range images that align with the training images of a GAN model. In this paper, we propose BDInvert, a novel GAN inversion approach to semantic editing of out-of-range images that are geometrically unaligned with the training images of a GAN model. To find a latent code that is semantically editable, BDInvert inverts an input out-of-range image into an alternative latent space rather than the original latent space. We also propose a regularized inversion method to find a solution that supports semantic editing in the alternative space. Our experiments show that BDInvert effectively supports semantic editing of out-of-range images with geometric transformations.

Few Shot Activity Recognition Using Variational Inference

Comment: Accepted at the IJCAI 2021 3rd International Workshop on Deep Learning for Human Activity Recognition. arXiv admin note: text overlap with arXiv:1611.09630, arXiv:1909.07945 by other authors

Link: http://arxiv.org/abs/2108.08990

Abstract

There has been remarkable progress in the last few years on learning models that can recognise novel classes with only a few labeled examples. Few-shot learning (FSL) for action recognition is the challenging task of recognising novel action categories which are represented by only a few instances in the training data. We propose a novel variational inference based architectural framework (HF-AR) for few-shot activity recognition. Our framework leverages volume-preserving Householder Flow to learn a flexible posterior distribution of the novel classes. This results in better performance compared to state-of-the-art few-shot approaches for human activity recognition. Our architecture consists of a base model and an adapter model. The base model is trained on seen classes and computes an embedding that represents the spatial and temporal insights extracted from the input video, e.g., a combination of a Resnet-152 and an LSTM-based encoder-decoder model. The adapter model applies a series of Householder transformations to compute a flexible posterior distribution that lends higher accuracy in the few-shot setting. Extensive experiments on three well-known datasets, UCF101, HMDB51 and Something-Something-V2, demonstrate similar or better performance on 1-shot and 5-shot classification compared to state-of-the-art few-shot approaches that use only RGB frame sequences as input. To the best of our knowledge, we are the first to explore variational inference along with Householder transformations to capture the full-rank covariance matrix of the posterior distribution for few-shot learning in activity recognition.
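The building block here is the Householder transformation: H = I - 2vv^T/(v^T v) is orthogonal with det(H) = -1, so |det H| = 1, and chaining such reflections reshapes a base posterior while preserving volume, which keeps the flow's density computation cheap. A quick numerical check of those two properties:

```python
import numpy as np

def householder(v):
    """Householder reflection H = I - 2 v v^T / (v^T v). Orthogonal by
    construction, with determinant -1, so applying a sequence of these
    to a posterior sample is a volume-preserving change of variables."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

H = householder(np.array([1.0, 2.0, 3.0]))
```

Because |det H| = 1, the log-density correction term of each flow step vanishes, yet composing several reflections still yields a full-rank covariance structure for the posterior.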

Parsing Birdsong with Deep Audio Embeddings

Comment: IJCAI 2021 Artificial Intelligence for Social Good (AI4SG) Workshop

Link: http://arxiv.org/abs/2108.09203

Abstract

Monitoring of bird populations has played a vital role in conservation efforts and in understanding biodiversity loss. The automation of this process has been facilitated by both sensing technologies, such as passive acoustic monitoring, and accompanying analytical tools, such as deep learning. However, machine learning models frequently have difficulty generalizing to examples not encountered in the training data. In our work, we present a semi-supervised approach to identify characteristic calls and environmental noise. We utilize several methods to learn a latent representation of audio samples, including a convolutional autoencoder and two pre-trained networks, and group the resulting embeddings for a domain expert to identify cluster labels. We show that our approach can improve classification precision and provide insight into the latent structure of environmental acoustic datasets.
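The group-then-label step can be sketched with a tiny k-means over toy embeddings; in the paper the embeddings come from a convolutional autoencoder or pre-trained networks, and cluster labels are assigned by a domain expert. The dimensions, separation, and fixed initialization below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "audio embeddings": two separated groups standing in for bird calls
# and background noise.
calls = rng.normal(0.0, 0.3, size=(20, 8))
noise = rng.normal(3.0, 0.3, size=(20, 8))
X = np.vstack([calls, noise])

def kmeans(X, k, init_idx, iters=20):
    """Plain k-means; a real pipeline would use k-means++ initialization."""
    centers = X[init_idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Fixed init (one point from each region) keeps the sketch deterministic.
labels = kmeans(X, k=2, init_idx=[0, len(X) - 1])
```

An expert then inspects a few samples per cluster and names the clusters, which is far cheaper than labeling every recording.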

Reinforcement Learning to Optimize Lifetime Value in Cold-Start Recommendation

Comment: Accepted by CIKM 2021

Link: http://arxiv.org/abs/2108.09141

Abstract

Recommender systems play a crucial role in modern e-commerce platforms. Due to the lack of historical interactions between users and items, cold-start recommendation is a challenging problem. In order to alleviate the cold-start issue, most existing methods introduce content and contextual information as auxiliary information. Nevertheless, these methods assume the recommended items behave steadily over time, while in a typical e-commerce scenario items generally have very different performances throughout their life period. In such a situation, it would be beneficial to consider the long-term return from the item perspective, which is usually ignored in conventional methods. Reinforcement learning (RL) naturally fits such a long-term optimization problem, in which the recommender could identify high-potential items and proactively allocate more user impressions to boost their growth, thereby improving the multi-period cumulative gains. Inspired by this idea, we model the process as a Partially Observable and Controllable Markov Decision Process (POC-MDP), and propose an actor-critic RL framework (RL-LTV) to incorporate item lifetime value (LTV) into the recommendation. In RL-LTV, the critic studies historical trajectories of items and predicts the future LTV of fresh items, while the actor suggests a score-based policy which maximizes the future LTV expectation. Scores suggested by the actor are then combined with classical ranking scores in a dual-rank framework, so that the recommendation is balanced with the LTV consideration. Our method outperforms the strong live baseline with relative improvements of 8.67% and 18.03% on the IPV and GMV of cold-start items on one of the largest e-commerce platforms.
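The dual-rank idea — blending the actor's LTV score with the classical ranking score — can be sketched as a weighted combination. The linear blend and the weight `alpha` below are illustrative assumptions, not the paper's exact formula.

```python
def dual_rank(rank_score, ltv_score, alpha):
    """Blend classical relevance with predicted long-term value (LTV)."""
    return (1 - alpha) * rank_score + alpha * ltv_score

items = [
    {"id": "mature",     "rank": 0.9, "ltv": 0.20},  # strong CTR, little upside
    {"id": "cold_start", "rank": 0.5, "ltv": 0.95},  # weak CTR, high potential
]

def order(alpha):
    """Item ids sorted by blended score, best first."""
    return [it["id"] for it in
            sorted(items, key=lambda it: dual_rank(it["rank"], it["ltv"], alpha),
                   reverse=True)]

low_ltv_weight = order(0.1)    # classical ranking dominates
high_ltv_weight = order(0.6)   # LTV consideration promotes the cold-start item
```

Raising the LTV weight reorders the list in favour of the cold-start item, which is exactly the impression-allocation lever the framework controls.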

Lessons from the Clustering Analysis of a Search Space: A Centroid-based Approach to Initializing NAS

Comment: Accepted to the Workshop on 'Data Science Meets Optimisation' at IJCAI 2021

Link: http://arxiv.org/abs/2108.09126

Abstract

Much effort in neural architecture search (NAS) research has been dedicated to algorithmic development, aiming at designing more efficient and less costly methods. Nonetheless, the investigation of the initialization of these techniques remains scarce, and currently most NAS methodologies rely on stochastic initialization procedures, because acquiring information prior to search is costly. However, the recent availability of NAS benchmarks has enabled prototyping with low computational resources. In this study, we propose to accelerate a NAS algorithm using a data-driven initialization technique, leveraging the availability of NAS benchmarks. In particular, we propose a two-step methodology. First, a calibrated clustering analysis of the search space is performed. Second, the centroids are extracted and used to initialize a NAS algorithm. We tested our proposal using Aging Evolution, an evolutionary algorithm, on NAS-Bench-101. The results show that, compared to a random initialization, faster convergence and a better final solution are achieved.
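The second step — seeding the search from centroids — can be sketched as follows, assuming architectures are already encoded as vectors and cluster assignments are given. Since a centroid is generally not itself a valid architecture, the nearest real encoding is used; the encodings and assignments here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy binary encodings standing in for NAS-Bench-101 cell encodings.
archs = rng.integers(0, 2, size=(100, 12)).astype(float)
# Pretend a calibrated clustering analysis produced these assignments.
assignments = rng.integers(0, 4, size=100)

initial_population = []
for c in range(4):
    members = archs[assignments == c]
    centroid = members.mean(axis=0)
    # Seed with the real architecture closest to the cluster centroid.
    nearest = members[np.argmin(((members - centroid) ** 2).sum(axis=1))]
    initial_population.append(nearest)
```

The resulting population then replaces the random individuals that an algorithm such as Aging Evolution would otherwise start from.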

DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction

Comment: This paper has been accepted by CIKM 2021 Resource Track

Link: http://arxiv.org/abs/2108.09091

Abstract

Nowadays, with the rapid development of IoT (Internet of Things) and CPS (Cyber-Physical Systems) technologies, big spatiotemporal data are being generated from mobile phones, car navigation systems, and traffic sensors. By leveraging state-of-the-art deep learning technologies on such data, urban traffic prediction has drawn a lot of attention in the AI and Intelligent Transportation System communities. The problem can be uniformly modeled with a 3D tensor (T, N, C), where T denotes the total time steps, N denotes the size of the spatial domain (i.e., mesh-grids or graph-nodes), and C denotes the channels of information. According to the specific modeling strategy, the state-of-the-art deep learning models can be divided into three categories: grid-based, graph-based, and multivariate time-series models. In this study, we first synthetically review the deep traffic models as well as the widely used datasets, then build a standard benchmark to comprehensively evaluate their performances with the same settings and metrics. Our study, named DL-Traff, is implemented with the two most popular deep learning frameworks, i.e., TensorFlow and PyTorch, and is publicly available as two GitHub repositories: https://github.com/deepkashiwa20/DL-Traff-Grid and https://github.com/deepkashiwa20/DL-Traff-Graph. With DL-Traff, we hope to deliver a useful resource to researchers who are interested in spatiotemporal data analysis.
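The (T, N, C) formulation can be made concrete by slicing the tensor into input/target windows, which is how all three model families consume the data; the window lengths below are arbitrary choices for illustration.

```python
import numpy as np

T, N, C = 48, 10, 2     # time steps, spatial units (grids/nodes), channels
data = np.random.default_rng(0).random((T, N, C))

def make_windows(data, in_len=12, out_len=3):
    """Slice a (T, N, C) tensor into (input, target) forecasting pairs."""
    X, Y = [], []
    for t in range(len(data) - in_len - out_len + 1):
        X.append(data[t:t + in_len])                    # observed history
        Y.append(data[t + in_len:t + in_len + out_len]) # horizon to predict
    return np.stack(X), np.stack(Y)

X, Y = make_windows(data)   # X: (34, 12, 10, 2), Y: (34, 3, 10, 2)
```

Grid-based models treat the N axis as an image, graph-based models as nodes with an adjacency matrix, and multivariate time-series models as N parallel channels — but all of them start from windows like these.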

FedSkel: Efficient Federated Learning on Heterogeneous Systems with Skeleton Gradients Update

Comment: CIKM 2021

Link: http://arxiv.org/abs/2108.09081

Abstract

Federated learning aims to protect users' privacy while performing data analysis across different participants. However, it is challenging to guarantee training efficiency on heterogeneous systems due to varying computational capabilities and communication bottlenecks. In this work, we propose FedSkel to enable computation-efficient and communication-efficient federated learning on edge devices by only updating the model's essential parts, named skeleton networks. FedSkel is evaluated on real edge devices with imbalanced datasets. Experimental results show that it achieves up to 5.52× speedups for CONV layers' back-propagation and 1.82× speedups for the whole training process, and reduces communication cost by 64.8%, with negligible accuracy loss.
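The communication saving comes from transmitting only a "skeleton" of each gradient tensor. A hedged sketch — selecting by gradient magnitude here is just one plausible criterion, not necessarily how FedSkel identifies the essential parts:

```python
import numpy as np

def skeleton_update(grads, keep_ratio=0.35):
    """Keep only the largest-magnitude fraction of gradients and zero the
    rest, so only the kept entries need to be communicated."""
    k = int(len(grads) * keep_ratio)
    keep = np.argsort(np.abs(grads))[-k:]   # indices of the "skeleton"
    sparse = np.zeros_like(grads)
    sparse[keep] = grads[keep]
    return sparse

grads = np.random.default_rng(0).standard_normal(1000)  # one flattened layer
sparse = skeleton_update(grads)
saving = 1 - np.count_nonzero(sparse) / len(grads)      # fraction not sent
```

With a 35% keep ratio, 65% of the entries are never transmitted, which is the shape of saving the paper reports (64.8% in its experiments).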

ASAT: Adaptively Scaled Adversarial Training in Time Series

Comment: Accepted to be appeared in Workshop on Machine Learning in Finance ?(KDD-MLF) 2021

Link: http://arxiv.org/abs/2108.08976

Abstract

Adversarial training is a method for enhancing neural networks to improve their robustness against adversarial examples. Besides addressing the security concerns of potential adversarial examples, adversarial training can also improve the performance of neural networks, train robust neural networks, and provide interpretability for neural networks. In this work, we take the first step to introduce adversarial training into time series analysis, taking the finance field as an example. Rethinking existing research on adversarial training, we propose adaptively scaled adversarial training (ASAT) for time series analysis, which treats data at different time slots with time-dependent importance weights. Experimental results show that the proposed ASAT can improve both the accuracy and the adversarial robustness of neural networks. Besides enhancing neural networks, we also propose a dimension-wise adversarial sensitivity indicator to probe the sensitivities and importance of input dimensions. With the proposed indicator, we can explain the decision bases of black-box neural networks.
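Time-dependent importance weights can be sketched as per-slot scaling of an FGSM-style sign-gradient perturbation; the geometric decay giving recent slots larger budgets is an illustrative choice, not the paper's exact weighting scheme.

```python
import numpy as np

def asat_perturbation(grad_sign, base_eps=0.05, decay=0.9):
    """Scale a sign-gradient perturbation per time slot: the most recent
    slot gets the full budget, older slots geometrically less."""
    T = len(grad_sign)
    weights = decay ** np.arange(T)[::-1]     # oldest slot -> smallest weight
    return base_eps * weights[:, None] * grad_sign

rng = np.random.default_rng(0)
grad_sign = np.sign(rng.standard_normal((10, 4)))   # (time slots, features)
delta = asat_perturbation(grad_sign)
```

During training, `delta` is added to the input series before the forward pass, so the adversarial pressure concentrates on the time slots deemed most important.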

Explainable Reinforcement Learning for Broad-XAI: A Conceptual Framework and Survey

Comment: 22 pages, 7 figures

Link: http://arxiv.org/abs/2108.09003

Abstract

Broad Explainable Artificial Intelligence moves away from interpreting individual decisions based on a single datum and aims to integrate explanations from multiple machine learning algorithms into a coherent explanation of an agent's behaviour that is aligned with the communication needs of the explainee. Reinforcement Learning (RL) methods, we propose, provide a potential backbone for the cognitive model required for the development of Broad-XAI. RL represents a suite of approaches that have had increasing success in solving a range of sequential decision-making problems. However, these algorithms all operate as black-box problem solvers, obfuscating their decision-making policy through a complex array of values and functions. Explainable RL (XRL) is a relatively recent field of research that aims to develop techniques for extracting concepts from the agent's perception of the environment; intrinsic/extrinsic motivations and beliefs; and Q-values, goals and objectives. This paper introduces a conceptual framework, called the Causal XRL Framework (CXF), that unifies current XRL research and uses RL as a backbone for the development of Broad-XAI. Additionally, we recognise that RL methods have the ability to incorporate a range of technologies that allow agents to adapt to their environment. CXF is designed to incorporate many standard RL extensions and to integrate with external ontologies and communication facilities so that the agent can answer questions that explain outcomes and justify its decisions.


Summary

The above is the full content of "Today's arXiv Picks | 29 Top-Conference Papers: ACM MM / ICCV / CIKM / AAAI / IJCAI", collected and organized for you by 生活随笔. We hope this article helps you solve the problems you have encountered.

If you find the content on 生活随笔 useful, please recommend 生活随笔 to your friends.
