

Today's arXiv Picks | 31 Latest EMNLP 2021 Papers


About #Today's arXiv Picks

This is a column under「AI 學術前沿」(AI Academic Frontier). Each day, the editors select high-quality papers from arXiv and deliver them to readers.

Analysis of Language Change in Collaborative Instruction Following

Comment: Findings of EMNLP 2021 Short Paper

Link: http://arxiv.org/abs/2109.04452

Abstract

We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise. Prior work studied such scenarios mostly in the context of reference games, and consistently found that language complexity is reduced along multiple dimensions, such as utterance length, as conventions are formed. In contrast, we find that, given the ability to increase instruction utility, instructors increase language complexity along these previously studied dimensions to better collaborate with increasingly skilled instruction followers.

Vision-and-Language or Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04448

Abstract

Pretrained vision-and-language BERTs aim to learn representations that combine information from both modalities. We propose a diagnostic method based on cross-modal input ablation to assess the extent to which these models actually integrate cross-modal information. This method involves ablating inputs from one modality, either entirely or selectively based on cross-modal grounding alignments, and evaluating the model prediction performance on the other modality. Model performance is measured by modality-specific tasks that mirror the model pretraining objectives (e.g. masked language modelling for text). Models that have learned to construct cross-modal representations using both modalities are expected to perform worse when inputs are missing from a modality. We find that recently proposed models have much greater relative difficulty predicting text when visual information is ablated, compared to predicting visual object categories when text is ablated, indicating that these models are not symmetrically cross-modal.
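
The ablation protocol can be pictured with a short sketch. This is not the authors' code: `model_mlm_loss` is a hypothetical callable standing in for any pretrained vision-and-language model's text-side masked-LM loss, and the gap it computes captures the paper's intuition that a truly cross-modal model should do worse when the other modality is removed.

```python
from typing import Callable, List, Sequence, Tuple

# Hypothetical interface: (text_tokens, image_regions) -> masked-LM loss on the text side.
LossFn = Callable[[Sequence[str], Sequence[object]], float]

def visual_ablation_gap(model_mlm_loss: LossFn,
                        examples: List[Tuple[Sequence[str], Sequence[object]]]) -> float:
    """Average increase in text-side loss when all visual inputs are ablated."""
    gaps = []
    for text_tokens, image_regions in examples:
        full = model_mlm_loss(text_tokens, image_regions)   # both modalities present
        ablated = model_mlm_loss(text_tokens, [])           # visual input removed
        gaps.append(ablated - full)                         # > 0 means the model relied on vision
    return sum(gaps) / len(gaps)
```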

HintedBT: Augmenting Back-Translation with Quality and Transliteration Hints

Comment: 17 pages including references and appendix. Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.04443

Abstract

Back-translation (BT) of target monolingual corpora is a widely used data augmentation strategy for neural machine translation (NMT), especially for low-resource language pairs. To improve effectiveness of the available BT data, we introduce HintedBT -- a family of techniques which provides hints (through tags) to the encoder and decoder. First, we propose a novel method of using both high and low quality BT data by providing hints (as source tags on the encoder) to the model about the quality of each source-target pair. We don't filter out low quality data but instead show that these hints enable the model to learn effectively from noisy data. Second, we address the problem of predicting whether a source token needs to be translated or transliterated to the target language, which is common in cross-script translation tasks (i.e., where source and target do not share the written script). For such cases, we propose training the model with additional hints (as target tags on the decoder) that provide information about the operation required on the source (translation or both translation and transliteration). We conduct experiments and detailed analyses on standard WMT benchmarks for three cross-script low/medium-resource language pairs: {Hindi,Gujarati,Tamil}-to-English. Our methods compare favorably with five strong and well established baselines. We show that using these hints, both separately and together, significantly improves translation quality and leads to state-of-the-art performance in all three language pairs in corresponding bilingual settings.
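
A minimal sketch of the source-side quality hint, assuming a binary tag vocabulary; the tag names (`<bt_high>`, `<bt_low>`) and the threshold are illustrative, and the paper's actual tagging scheme may be finer-grained.

```python
def tag_source(src: str, quality_score: float, threshold: float = 0.5) -> str:
    """Prefix a back-translated source sentence with a coarse quality tag,
    so the encoder can condition on data quality instead of filtering."""
    tag = "<bt_high>" if quality_score >= threshold else "<bt_low>"
    return f"{tag} {src}"

pairs = [("ich liebe nlp", 0.9), ("kauderwelsch satz", 0.2)]
tagged = [tag_source(s, q) for s, q in pairs]
# ['<bt_high> ich liebe nlp', '<bt_low> kauderwelsch satz']
```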

AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models

Comment: Findings of EMNLP 2021. Code available at: https://github.com/H-TayyarMadabushi/AStitchInLanguageModels

Link: http://arxiv.org/abs/2109.04413

Abstract

Despite their success in a variety of NLP tasks, pre-trained language models, due to their heavy reliance on compositionality, fail in effectively capturing the meanings of multiword expressions (MWEs), especially idioms. Therefore, datasets and methods to improve the representation of MWEs are urgently needed. Existing datasets are limited to providing the degree of idiomaticity of expressions along with the literal and, where applicable, (a single) non-literal interpretation of MWEs. This work presents a novel dataset of naturally occurring sentences containing MWEs manually classified into a fine-grained set of meanings, spanning both English and Portuguese. We use this dataset in two tasks designed to test i) a language model's ability to detect idiom usage, and ii) the effectiveness of a language model in generating representations of sentences containing idioms. Our experiments demonstrate that, on the task of detecting idiomatic usage, these models perform reasonably well in the one-shot and few-shot scenarios, but that there is significant scope for improvement in the zero-shot scenario. On the task of representing idiomaticity, we find that pre-training is not always effective, while fine-tuning could provide a sample efficient method of learning representations of sentences containing MWEs.

Learning from Uneven Training Data: Unlabeled, Single Label, and Multiple Labels

Comment: EMNLP 2021; Our code is publicly available at https://github.com/szhang42/Uneven_training_data

Link: http://arxiv.org/abs/2109.04408

Abstract

Training NLP systems typically assumes access to annotated data that has a single human label per example. Given imperfect labeling from annotators and inherent ambiguity of language, we hypothesize that a single label is not sufficient to learn the spectrum of language interpretation. We explore new label annotation distribution schemes, assigning multiple labels per example for a small subset of training examples. Introducing such multi-label examples at the cost of annotating fewer examples brings clear gains on the natural language inference and entity typing tasks, even when we simply first train with single-label data and then fine-tune with multi-label examples. Extending a MixUp data augmentation framework, we propose a learning algorithm that can learn from uneven training examples (with zero, one, or multiple labels). This algorithm efficiently combines signals from uneven training data and brings additional gains in low annotation budget and cross-domain settings. Together, our method achieves consistent gains in both accuracy and label distribution metrics in two tasks, suggesting that training with uneven training data can be beneficial for many NLP tasks.
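
A minimal sketch of the MixUp building block the paper extends, written over label *distributions* so that single-label and multi-label examples mix uniformly; the variable names and Beta parameter are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha: float = 0.4):
    """Convex-combine two feature vectors and their label distributions."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x_a, y_a = np.array([1.0, 0.0]), np.array([1.0, 0.0, 0.0])   # one gold label
x_b, y_b = np.array([0.0, 1.0]), np.array([0.0, 0.6, 0.4])   # multi-label distribution
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)                     # a soft, in-between example
```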

All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality

Comment: Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.04404

Abstract

Similarity measures are a vital tool for understanding how language models represent and process language. Standard representational similarity measures such as cosine similarity and Euclidean distance have been successfully used in static word embedding models to understand how words cluster in semantic space. Recently, these measures have been applied to embeddings from contextualized models such as BERT and GPT-2. In this work, we call into question the informativity of such measures for contextualized language models. We find that a small number of rogue dimensions, often just 1-3, dominate these measures. Moreover, we find a striking mismatch between the dimensions that dominate similarity measures and those which are important to the behavior of the model. We show that simple postprocessing techniques such as standardization are able to correct for rogue dimensions and reveal underlying representational quality. We argue that accounting for rogue dimensions is essential for any similarity-based analysis of contextual language models.
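
The standardization fix mentioned in the abstract is simple enough to sketch: z-score each embedding dimension across a corpus before computing cosine similarity, so a few high-variance "rogue" dimensions no longer dominate. A minimal sketch with a simulated rogue dimension, not the authors' code:

```python
import numpy as np

def standardized_cosine(E: np.ndarray, i: int, j: int) -> float:
    """Cosine similarity between rows i and j after per-dimension standardization."""
    Z = (E - E.mean(axis=0)) / (E.std(axis=0) + 1e-8)   # z-score each dimension
    a, b = Z[i], Z[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

E = np.random.default_rng(0).normal(size=(100, 768))    # toy contextual embeddings
E[:, 0] += 50.0                                         # simulate one rogue dimension
sim = standardized_cosine(E, 0, 1)                      # no longer dominated by dim 0
```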

Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph

Comment: Published in Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04400

Abstract

In cross-lingual text classification, it is required that task-specific training data in high-resource source languages are available, where the task is identical to that of a low-resource target language. However, collecting such training data can be infeasible because of the labeling cost, task characteristics, and privacy concerns. This paper proposes an alternative solution that uses only task-independent word embeddings of high-resource languages and bilingual dictionaries. First, we construct a dictionary-based heterogeneous graph (DHG) from bilingual dictionaries. This opens the possibility to use graph neural networks for cross-lingual transfer. The remaining challenge is the heterogeneity of DHG because multiple languages are considered. To address this challenge, we propose a dictionary-based heterogeneous graph neural network (DHGNet) that effectively handles the heterogeneity of DHG by two-step aggregations, which are word-level and language-level aggregations. Experimental results demonstrate that our method outperforms pretrained models even though it does not have access to large corpora. Furthermore, it can perform well even though dictionaries contain many incorrect translations. Its robustness allows the usage of a wider range of dictionaries such as automatically constructed and crowdsourced dictionaries, which are convenient for real-world applications.

Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04385

Abstract

Research shows that natural language processing models are generally considered to be vulnerable to adversarial attacks; but recent work has drawn attention to the issue of validating these adversarial inputs against certain criteria (e.g., the preservation of semantics and grammaticality). Enforcing constraints to uphold such criteria may render attacks unsuccessful, raising the question of whether valid attacks are actually feasible. In this work, we investigate this through the lens of human language ability. We report on crowdsourcing studies in which we task humans with iteratively modifying words in an input text, while receiving immediate model feedback, with the aim of causing a sentiment classification model to misclassify the example. Our findings suggest that humans are capable of generating a substantial amount of adversarial examples using semantics-preserving word substitutions. We analyze how human-generated adversarial examples compare to the recently proposed TextFooler, Genetic, BAE and SememePSO attack algorithms on the dimensions naturalness, preservation of sentiment, grammaticality and substitution rate. Our findings suggest that human-generated adversarial examples are not more able than the best algorithms to generate natural-reading, sentiment-preserving examples, though they do so by being much more computationally efficient.

Multi-granularity Textual Adversarial Attack with Behavior Cloning

Comment: Accepted by the main conference of EMNLP 2021

Link: http://arxiv.org/abs/2109.04367

Abstract

Recently, textual adversarial attack models have become increasingly popular due to their success in estimating the robustness of NLP models. However, existing works have obvious deficiencies. (1) They usually consider only a single granularity of modification strategies (e.g. word-level or sentence-level), which is insufficient to explore the holistic textual space for generation; (2) They need to query victim models hundreds of times to make a successful attack, which is highly inefficient in practice. To address such problems, in this paper we propose MAYA, a Multi-grAnularitY Attack model to effectively generate high-quality adversarial samples with fewer queries to victim models. Furthermore, we propose a reinforcement-learning based method to train a multi-granularity attack agent through behavior cloning with the expert knowledge from our MAYA algorithm to further reduce the query times. Additionally, we also adapt the agent to attack black-box models that only output labels without confidence scores. We conduct comprehensive experiments to evaluate our attack models by attacking BiLSTM, BERT and RoBERTa in two different black-box attack settings and three benchmark datasets. Experimental results show that our models achieve overall better attacking performance and produce more fluent and grammatical adversarial samples compared to baseline models. Besides, our adversarial attack agent significantly reduces the query times in both attack settings. Our codes are released at https://github.com/Yangyi-Chen/MAYA.

Uncertainty Measures in Neural Belief Tracking and the Effects on Dialogue Policy Performance

Comment: 14 pages, 2 figures, accepted at EMNLP 2021 Main conference, Code at: https://gitlab.cs.uni-duesseldorf.de/general/dsml/setsumbt-public

Link: http://arxiv.org/abs/2109.04349

Abstract

The ability to identify and resolve uncertainty is crucial for the robustness of a dialogue system. Indeed, this has been confirmed empirically on systems that utilise Bayesian approaches to dialogue belief tracking. However, such systems consider only confidence estimates and have difficulty scaling to more complex settings. Neural dialogue systems, on the other hand, rarely take uncertainties into account. They are therefore overconfident in their decisions and less robust. Moreover, the performance of the tracking task is often evaluated in isolation, without consideration of its effect on the downstream policy optimisation. We propose the use of different uncertainty measures in neural belief tracking. The effects of these measures on the downstream task of policy optimisation are evaluated by adding selected measures of uncertainty to the feature space of the policy and training policies through interaction with a user simulator. Both human and simulated user results show that incorporating these measures leads to improvements both of the performance and of the robustness of the downstream dialogue policy. This highlights the importance of developing neural dialogue belief trackers that take uncertainty into account.
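
One concrete way to read "adding measures of uncertainty to the feature space of the policy" is sketched below, using the entropy of each slot's belief distribution as the uncertainty feature. The paper evaluates several measures; this toy belief state and the choice of entropy are illustrative assumptions.

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy of a discrete belief distribution."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy belief state: per-slot distributions over candidate values.
belief = {"food": np.array([0.7, 0.2, 0.1]),     # fairly certain
          "area": np.array([0.34, 0.33, 0.33])}  # near-uniform: very uncertain
extra_features = np.array([entropy(p) for p in belief.values()])
policy_input = np.concatenate([*belief.values(), extra_features])
```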

Learning Opinion Summarizers by Selecting Informative Reviews

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04325

Abstract

Opinion summarization has been traditionally approached with unsupervised, weakly-supervised and few-shot learning techniques. In this work, we collect a large dataset of summaries paired with user reviews for over 31,000 products, enabling supervised training. However, the number of reviews per product is large (320 on average), making summarization - and especially training a summarizer - impractical. Moreover, the content of many reviews is not reflected in the human-written summaries, and, thus, the summarizer trained on random review subsets hallucinates. In order to deal with both of these challenges, we formulate the task as jointly learning to select informative subsets of reviews and summarizing the opinions expressed in these subsets. The choice of the review subset is treated as a latent variable, predicted by a small and simple selector. The subset is then fed into a more powerful summarizer. For joint training, we use amortized variational inference and policy gradient methods. Our experiments demonstrate the importance of selecting informative reviews, resulting in improved quality of summaries and reduced hallucinations.

Translate & Fill: Improving Zero-Shot Multilingual Semantic Parsing with Synthetic Data

Comment: Accepted to EMNLP 2021 (Findings)

Link: http://arxiv.org/abs/2109.04319

Abstract

While multilingual pretrained language models (LMs) fine-tuned on a single language have shown substantial cross-lingual task transfer capabilities, there is still a wide performance gap in semantic parsing tasks when target language supervision is available. In this paper, we propose a novel Translate-and-Fill (TaF) method to produce silver training data for a multilingual semantic parser. This method simplifies the popular Translate-Align-Project (TAP) pipeline and consists of a sequence-to-sequence filler model that constructs a full parse conditioned on an utterance and a view of the same parse. Our filler is trained on English data only but can accurately complete instances in other languages (i.e., translations of the English training utterances), in a zero-shot fashion. Experimental results on three multilingual semantic parsing datasets show that data augmentation with TaF reaches accuracies competitive with similar systems which rely on traditional alignment techniques.

MATE: Multi-view Attention for Table Transformer Efficiency

Comment: Accepted to EMNLP 2021

Link: http://arxiv.org/abs/2109.04312

Abstract

This work presents a sparse-attention Transformer architecture for modeling documents that contain large tables. Tables are ubiquitous on the web, and are rich in information. However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens. Here we propose MATE, a novel Transformer architecture designed to model the structure of web tables. MATE uses sparse attention in a way that allows heads to efficiently attend to either rows or columns in a table. This architecture scales linearly with respect to speed and memory, and can handle documents containing more than 8000 tokens with current accelerators. MATE also has a more appropriate inductive bias for tabular data, and sets a new state-of-the-art for three table reasoning datasets. For HybridQA (Chen et al., 2020b), a dataset that involves large documents containing tables, we improve the best prior result by 19 points.
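
The row/column sparsity idea can be illustrated with a mask: each "row head" attends only to cells sharing the query cell's row, and each "column head" only to cells sharing its column, which is where the linear scaling comes from. A minimal sketch; the flattened-token indexing scheme is an assumption, not the paper's implementation.

```python
import numpy as np

def table_attention_mask(rows: np.ndarray, cols: np.ndarray, head_type: str) -> np.ndarray:
    """Boolean (n, n) mask; True where attention is allowed for this head type."""
    if head_type == "row":
        return rows[:, None] == rows[None, :]   # query and key share a row
    return cols[:, None] == cols[None, :]       # query and key share a column

# A 2x2 table flattened to 4 tokens, with (row, col) coordinates per token.
rows = np.array([0, 0, 1, 1])
cols = np.array([0, 1, 0, 1])
row_mask = table_attention_mask(rows, cols, "row")   # token 0 may attend to tokens 0 and 1
```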

Generalised Unsupervised Domain Adaptation of Neural Machine Translation with Cross-Lingual Data Selection

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04292

Abstract

This paper considers the unsupervised domain adaptation problem for neural machine translation (NMT), where we assume access to only monolingual text in either the source or target language in the new domain. We propose a cross-lingual data selection method to extract in-domain sentences in the missing language side from a large generic monolingual corpus. Our proposed method trains an adaptive layer on top of multilingual BERT by contrastive learning to align the representation between the source and target language. This then enables the transferability of the domain classifier between the languages in a zero-shot manner. Once the in-domain data is detected by the classifier, the NMT model is then adapted to the new domain by jointly learning translation and domain discrimination tasks. We evaluate our cross-lingual data selection method on NMT across five diverse domains in three language pairs, as well as a real-world scenario of translation for COVID-19. The results show that our proposed method outperforms other selection baselines by up to +1.5 BLEU score.

Cartography Active Learning

Comment: Findings EMNLP 2021

Link: http://arxiv.org/abs/2109.04282

Abstract

We propose Cartography Active Learning (CAL), a novel Active Learning (AL) algorithm that exploits the behavior of the model on individual instances during training as a proxy to find the most informative instances for labeling. CAL is inspired by data maps, which were recently proposed to derive insights into dataset quality (Swayamdipta et al., 2020). We compare our method on popular text classification tasks to commonly used AL strategies, which instead rely on post-training behavior. We demonstrate that CAL is competitive to other common AL methods, showing that training dynamics derived from small seed data can be successfully used for AL. We provide insights into our new AL method by analyzing batch-level statistics utilizing the data maps. Our results further show that CAL results in a more data-efficient learning strategy, achieving comparable or better results with considerably less training data.
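
The data-map statistics CAL builds on (Swayamdipta et al., 2020) are easy to sketch: per-example confidence (mean probability of the gold label across training epochs) and variability (its standard deviation). How CAL turns these into an acquisition function is the paper's contribution; the sketch below shows only the statistics themselves.

```python
import numpy as np

def data_map_stats(gold_probs: np.ndarray):
    """gold_probs: (n_epochs, n_examples) probability of the gold label per epoch."""
    confidence = gold_probs.mean(axis=0)    # high = easy-to-learn
    variability = gold_probs.std(axis=0)    # high = ambiguous
    return confidence, variability

probs = np.array([[0.90, 0.20, 0.50],
                  [0.95, 0.30, 0.80],
                  [0.97, 0.25, 0.40]])
conf, var = data_map_stats(probs)
# Low-confidence / high-variability items are plausible candidates for labeling.
```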

Efficient Nearest Neighbor Language Models

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04212

Abstract

Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore, which allows them to learn through explicitly memorizing the training datapoints. While effective, these models often require retrieval from a large datastore at test time, significantly increasing the inference overhead and thus limiting the deployment of non-parametric NLMs in practical applications. In this paper, we take the recently proposed $k$-nearest neighbors language model (Khandelwal et al., 2019) as an example, exploring methods to improve its efficiency along various dimensions. Experiments on the standard WikiText-103 benchmark and domain-adaptation datasets show that our methods are able to achieve up to a 6x speed-up in inference speed while retaining comparable performance. The empirical analysis we present may provide guidelines for future research seeking to develop or deploy more efficient non-parametric NLMs.
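
For context, the kNN-LM being made more efficient interpolates the parametric LM distribution with a distribution built from retrieved datastore neighbors, p(w) = λ·p_kNN(w) + (1−λ)·p_LM(w). A minimal sketch of that interpolation (the λ value and distance kernel are assumptions; the retrieval step itself is what dominates inference cost):

```python
import numpy as np

def knn_lm_prob(p_lm: np.ndarray, neighbor_targets: np.ndarray,
                neighbor_dists: np.ndarray, lam: float = 0.25) -> np.ndarray:
    """p(w) = lam * p_kNN(w) + (1 - lam) * p_LM(w)."""
    w = np.exp(-neighbor_dists)              # closer neighbors get more weight
    p_knn = np.zeros_like(p_lm)
    np.add.at(p_knn, neighbor_targets, w)    # scatter neighbor weights onto the vocabulary
    p_knn /= p_knn.sum()
    return lam * p_knn + (1 - lam) * p_lm

p_lm = np.array([0.5, 0.3, 0.2])             # toy 3-word vocabulary
p = knn_lm_prob(p_lm, np.array([0, 2, 2]), np.array([1.0, 0.5, 0.7]))
```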

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning

Comment: Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.04144

Abstract

Recent prompt-based approaches allow pretrained language models to achieve strong performance on few-shot finetuning by reformulating downstream tasks as a language modeling problem. In this work, we demonstrate that, despite its advantages in low data regimes, finetuned prompt-based models for sentence pair classification tasks still suffer from a common pitfall of adopting inference heuristics based on lexical overlap, e.g., models incorrectly assuming a sentence pair is of the same meaning because they consist of the same set of words. Interestingly, we find that this particular inference heuristic is significantly less present in the zero-shot evaluation of the prompt-based model, indicating how finetuning can be destructive to useful knowledge learned during pretraining. We then show that adding a regularization that preserves pretraining weights is effective in mitigating this destructive tendency of few-shot finetuning. Our evaluation on three datasets demonstrates promising improvements on the three corresponding challenge datasets used to diagnose the inference heuristics.
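
The abstract does not spell out the exact regularizer, so the sketch below shows one standard form of "preserving pretraining weights": an L2 penalty anchoring the finetuned parameters to their pretrained values. Treat it as an assumed instantiation, with a toy linear layer standing in for the model.

```python
import torch

def pretraining_anchor_loss(model: torch.nn.Module,
                            pretrained: dict, strength: float = 0.01) -> torch.Tensor:
    """L2 distance between current and pretrained parameters (assumed regularizer form)."""
    return strength * sum(((p - pretrained[n]) ** 2).sum()
                          for n, p in model.named_parameters())

model = torch.nn.Linear(8, 2)                 # stand-in for a prompt-based LM head
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
task_loss = torch.tensor(0.0)                 # placeholder for the few-shot task loss
total_loss = task_loss + pretraining_anchor_loss(model, anchor)
total_loss.backward()                         # gradients pull weights back toward pretraining
```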

Word-Level Coreference Resolution

Comment: Accepted to EMNLP-2021

Link: http://arxiv.org/abs/2109.04127

Abstract

Recent coreference resolution models rely heavily on span representations to find coreference links between word spans. As the number of spans is $O(n^2)$ in the length of text and the number of potential links is $O(n^4)$, various pruning techniques are necessary to make this approach computationally feasible. We propose instead to consider coreference links between individual words rather than word spans and then reconstruct the word spans. This reduces the complexity of the coreference model to $O(n^2)$ and allows it to consider all potential mentions without pruning any of them out. We also demonstrate that, with these changes, SpanBERT for coreference resolution will be significantly outperformed by RoBERTa. While being highly efficient, our model performs competitively with recent coreference resolution systems on the OntoNotes benchmark.
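
The complexity argument is the key point: scoring every (word, antecedent-word) pair directly is an $O(n^2)$ table, versus $O(n^4)$ for span pairs. A toy sketch with a dot-product scorer (the paper's actual scorer and span-reconstruction step are more involved):

```python
import numpy as np

def word_pair_scores(word_reprs: np.ndarray) -> np.ndarray:
    """Toy O(n^2) antecedent scorer: dot products, masked so links point backwards."""
    n = word_reprs.shape[0]
    scores = word_reprs @ word_reprs.T                  # (n, n) pairwise scores
    mask = np.tril(np.ones((n, n), dtype=bool), k=-1)   # antecedents must precede the word
    return np.where(mask, scores, -np.inf)

reprs = np.random.default_rng(0).normal(size=(6, 16))
antecedent = word_pair_scores(reprs).argmax(axis=1)
# Word 0 has no valid antecedent; real systems add a dummy "no antecedent" option.
```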

MapRE: An Effective Semantic Mapping Approach for Low-resource Relation Extraction

Comment: Accepted as a long paper in the main conference of EMNLP 2021

Link: http://arxiv.org/abs/2109.04108

Abstract

Neural relation extraction models have shown promising results in recent years; however, the model performance drops dramatically given only a few training samples. Recent works try leveraging the advances in few-shot learning to solve the low-resource problem, where they train label-agnostic models to directly compare the semantic similarities among context sentences in the embedding space. However, the label-aware information, i.e., the relation label that contains the semantic knowledge of the relation itself, is often neglected for prediction. In this work, we propose a framework considering both label-agnostic and label-aware semantic mapping information for low-resource relation extraction. We show that incorporating the above two types of mapping information in both pretraining and fine-tuning can significantly improve the model performance on low-resource relation extraction tasks.

TimeTraveler: Reinforcement Learning for Temporal Knowledge Graph Forecasting

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04101

Abstract

Temporal knowledge graph (TKG) reasoning is a crucial task that has gained increasing research interest in recent years. Most existing methods focus on reasoning at past timestamps to complete the missing facts, and there are only a few works of reasoning on known TKGs to forecast future facts. Compared with the completion task, the forecasting task is more difficult, as it faces two main challenges: (1) how to effectively model the time information to handle future timestamps? (2) how to make inductive inference to handle previously unseen entities that emerge over time? To address these challenges, we propose the first reinforcement learning method for forecasting. Specifically, the agent travels on historical knowledge graph snapshots to search for the answer. Our method defines a relative time encoding function to capture the timespan information, and we design a novel time-shaped reward based on the Dirichlet distribution to guide the model learning. Furthermore, we propose a novel representation method for unseen entities to improve the inductive inference ability of the model. We evaluate our method on this link prediction task at future timestamps. Extensive experiments on four benchmark datasets demonstrate substantial performance improvement, along with higher explainability, less computation, and fewer parameters, compared with existing state-of-the-art methods.

A Three-Stage Learning Framework for Low-Resource Knowledge-Grounded Dialogue Generation

Comment: Accepted by EMNLP 2021 main conference

Link: http://arxiv.org/abs/2109.04096

Abstract

Neural conversation models have shown great potential for generating fluent and informative responses by introducing external background knowledge. Nevertheless, it is laborious to construct such knowledge-grounded dialogues, and existing models usually perform poorly when transferred to new domains with limited training samples. Therefore, building a knowledge-grounded dialogue system under the low-resource setting is still a crucial issue. In this paper, we propose a novel three-stage learning framework based on weakly supervised learning, which benefits from large-scale ungrounded dialogues and an unstructured knowledge base. To better cooperate with this framework, we devise a variant of the Transformer with a decoupled decoder, which facilitates the disentangled learning of response generation and knowledge incorporation. Evaluation results on two benchmarks indicate that our approach can outperform other state-of-the-art methods with less training data, and even in the zero-resource scenario, our approach still performs well.

Debiasing Methods in Natural Language Understanding Make Bias More Accessible

Comment: Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.04095

Abstract

Model robustness to bias is often determined by the generalization on carefully designed out-of-distribution datasets. Recent debiasing methods in natural language understanding (NLU) improve performance on such datasets by pressuring models into making unbiased predictions. An underlying assumption behind such methods is that this also leads to the discovery of more robust features in the model's inner representations. We propose a general probing-based framework that allows for post-hoc interpretation of biases in language models, and use an information-theoretic approach to measure the extractability of certain biases from the model's representations. We experiment with several NLU datasets and known biases, and show that, counter-intuitively, the more a language model is pushed towards a debiased regime, the more bias is actually encoded in its inner representations.

Thinking Clearly, Talking Fast: Concept-Guided Non-Autoregressive Generation for Open-Domain Dialogue Systems

Comment: Accepted by EMNLP 2021, 12 pages

Link: http://arxiv.org/abs/2109.04084

Abstract

Human dialogue contains evolving concepts, and speakers naturally associate multiple concepts to compose a response. However, current dialogue models with the seq2seq framework lack the ability to effectively manage concept transitions and can hardly introduce multiple concepts to responses in a sequential decoding manner. To facilitate a controllable and coherent dialogue, in this work, we devise a concept-guided non-autoregressive model (CG-nAR) for open-domain dialogue generation. The proposed model comprises a multi-concept planning module that learns to identify multiple associated concepts from a concept graph and a customized Insertion Transformer that performs concept-guided non-autoregressive generation to complete a response. The experimental results on two public datasets show that CG-nAR can produce diverse and coherent responses, outperforming state-of-the-art baselines in both automatic and human evaluations with substantially faster inference speed.

Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining

Comment: Accepted by EMNLP 2021, 12 pages

Link: http://arxiv.org/abs/2109.04080

Abstract

With the rapid increase in the volume of dialogue data from daily life, there is a growing demand for dialogue summarization. Unfortunately, training a large summarization model is generally infeasible due to the inadequacy of dialogue data with annotated summaries. Most existing works for low-resource dialogue summarization directly pretrain models in other domains, e.g., the news domain, but they generally neglect the huge difference between dialogues and conventional articles. To bridge the gap between out-of-domain pretraining and in-domain fine-tuning, in this work, we propose a multi-source pretraining paradigm to better leverage the external summary data. Specifically, we exploit large-scale in-domain non-summary data to separately pretrain the dialogue encoder and the summary decoder. The combined encoder-decoder model is then pretrained on the out-of-domain summary data using adversarial critics, aiming to facilitate domain-agnostic summarization. The experimental results on two public datasets show that with only limited training data, our approach achieves competitive performance and generalizes well in different dialogue scenarios.

Table-based Fact Verification with Salience-aware Learning

Comment: EMNLP 2021 (Findings)

Link: http://arxiv.org/abs/2109.04053

Abstract

Tables provide valuable knowledge that can be used to verify textual statements. While a number of works have considered table-based fact verification, direct alignments of tabular data with tokens in textual statements are rarely available. Moreover, training a generalized fact verification model requires abundant labeled training data. In this paper, we propose a novel system to address these problems. Inspired by counterfactual causality, our system identifies token-level salience in the statement with probing-based salience estimation. Salience estimation allows enhanced learning of fact verification from two perspectives. From one perspective, our system conducts masked salient token prediction to enhance the model for alignment and reasoning between the table and the statement. From the other perspective, our system applies salience-aware data augmentation to generate a more diverse set of training instances by replacing non-salient terms. Experimental results on TabFact show the effective improvement by the proposed salience-aware learning techniques, leading to the new SOTA performance on the benchmark. Our code is publicly available at https://github.com/luka-group/Salience-aware-Learning.

Distributionally Robust Multilingual Machine Translation

Comment: Long paper accepted by EMNLP 2021 main conference

Link: http://arxiv.org/abs/2109.04020

Abstract

Multilingual neural machine translation (MNMT) learns to translate multiple language pairs with a single model, potentially improving both the accuracy and the memory-efficiency of deployed models. However, the heavy data imbalance between languages hinders the model from performing uniformly across language pairs. In this paper, we propose a new learning objective for MNMT based on distributionally robust optimization, which minimizes the worst-case expected loss over the set of language pairs. We further show how to practically optimize this objective for large translation corpora using an iterated best response scheme, which is both effective and incurs negligible additional computational cost compared to standard empirical risk minimization. We perform extensive experiments on three sets of languages from two datasets and show that our method consistently outperforms strong baseline methods in terms of average and per-language performance under both many-to-one and one-to-many translation settings.
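
The objective itself is compact enough to sketch: instead of averaging losses over language pairs (empirical risk), take the maximum. The sketch below shows only this worst-case objective with toy numbers; the paper's contribution of optimizing it at scale via iterated best response is not shown.

```python
import torch

def worst_case_loss(per_pair_losses: torch.Tensor) -> torch.Tensor:
    """DRO objective over language pairs: minimize the maximum expected loss."""
    return per_pair_losses.max()

# Toy expected losses for three hypothetical pairs, e.g. de-en, fr-en, gu-en.
losses = torch.tensor([1.2, 0.7, 2.3], requires_grad=True)
worst_case_loss(losses).backward()
print(losses.grad)   # tensor([0., 0., 1.]) -- gradient flows only to the worst pair
```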

Graphine: A Dataset for Graph-aware Terminology Definition Generation

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04018

Abstract

Precisely defining terminology is the first step in scientific communication. Developing neural text generation models for definition generation can circumvent labor-intensive curation, further accelerating scientific discovery. Unfortunately, the lack of a large-scale terminology definition dataset hinders progress toward definition generation. In this paper, we present a large-scale terminology definition dataset, Graphine, covering 2,010,648 terminology-definition pairs, spanning 227 biomedical subdisciplines. Terminologies in each subdiscipline further form a directed acyclic graph, opening up new avenues for developing graph-aware text generation models. We then propose a novel graph-aware definition generation model, Graphex, that integrates a transformer with a graph neural network. Our model outperforms existing text generation models by exploiting the graph structure of terminologies. We further demonstrate how Graphine can be used to evaluate pretrained language models, compare graph representation learning methods and predict sentence granularity. We envision Graphine to be a unique resource for definition generation and many other NLP tasks in biomedicine.

Weakly-Supervised Visual-Retriever-Reader for Knowledge-based Question Answering

Comment: accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.04014

Abstract

Knowledge-based visual question answering (VQA) requires answering questions with external knowledge in addition to the content of images. One dataset that is mostly used in evaluating knowledge-based VQA is OK-VQA, but it lacks a gold standard knowledge corpus for retrieval. Existing works leverage different knowledge bases (e.g., ConceptNet and Wikipedia) to obtain external knowledge. Because of varying knowledge bases, it is hard to fairly compare models' performance. To address this issue, we collect a natural language knowledge base that can be used for any VQA system. Moreover, we propose a Visual Retriever-Reader pipeline to approach knowledge-based VQA. The visual retriever aims to retrieve relevant knowledge, and the visual reader seeks to predict answers based on given knowledge. We introduce various ways to retrieve knowledge using text and images and two reader styles: classification and extraction. Both the retriever and reader are trained with weak supervision. Our experimental results show that a good retriever can significantly improve the reader's performance on the OK-VQA challenge. The code and corpus are provided at https://github.com/luomancs/retriever_reader_for_okvqa.git

Graph Based Network with Contextualized Representations of Turns in Dialogue

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04008

Abstract

Dialogue-based relation extraction (RE) aims to extract relation(s) between two arguments that appear in a dialogue. Because dialogues have the characteristics of high personal pronoun occurrences and low information density, and since most relational facts in dialogues are not supported by any single sentence, dialogue-based relation extraction requires a comprehensive understanding of dialogue. In this paper, we propose the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN), modeled by paying attention to the way people understand dialogues. In addition, we propose a novel approach which treats the task of emotion recognition in conversations (ERC) as a dialogue-based RE. Experiments on a dialogue-based RE dataset and three ERC datasets demonstrate that our model is very effective in various dialogue-based natural language understanding tasks. In these experiments, TUCORE-GCN outperforms the state-of-the-art models on most of the benchmark datasets. Our code is available at https://github.com/BlackNoodle/TUCORE-GCN.

Competence-based Curriculum Learning for Multilingual Machine Translation

Comment: Accepted by Findings of EMNLP 2021. We release the codes at https://github.com/zml24/ccl-m

Link: http://arxiv.org/abs/2109.04002

Abstract

Currently, multilingual machine translation is receiving more and more attention since it brings better performance for low-resource languages (LRLs) and saves more space. However, existing multilingual machine translation models face a severe challenge: imbalance. As a result, the translation performance of different languages in multilingual translation models differs widely. We argue that this imbalance problem stems from the different learning competencies of different languages. Therefore, we focus on balancing the learning competencies of different languages and propose Competence-based Curriculum Learning for Multilingual Machine Translation, named CCL-M. Specifically, we first define two competencies to help schedule the high-resource languages (HRLs) and the low-resource languages: 1) Self-evaluated Competence, evaluating how well the language itself has been learned; and 2) HRLs-evaluated Competence, evaluating whether an LRL is ready to be learned according to HRLs' Self-evaluated Competence. Based on the above competencies, we utilize the proposed CCL-M algorithm to gradually add new languages into the training set in a curriculum learning manner. Furthermore, we propose a novel competence-aware dynamic balancing sampling strategy for better selecting training samples in multilingual training. Experimental results show that our approach achieves a steady and significant performance gain compared to the previous state-of-the-art approach on the TED talks dataset.

Bag of Tricks for Optimizing Transformer Efficiency

Comment: accepted by EMNLP (Findings) 2021

Link: http://arxiv.org/abs/2109.04030

Abstract

Improving Transformer efficiency has become increasingly attractive recently. A wide range of methods have been proposed, e.g., pruning, quantization, and new architectures. But these methods are either sophisticated in implementation or dependent on hardware. In this paper, we show that the efficiency of the Transformer can be improved by combining some simple and hardware-agnostic methods, including tuning hyper-parameters, better design choices and training strategies. On the WMT news translation tasks, we improve the inference efficiency of a strong Transformer system by 3.80x on CPU and 2.52x on GPU. The code is publicly available at https://github.com/Lollipop321/mini-decoder-network.
