

Today's arXiv Picks | 28 New EMNLP 2021 Papers


About #今日arXiv精選 (Today's arXiv Picks)

This is a column from 「AI 學術前沿」: each day, its editors select high-quality papers from arXiv and share them with readers.

Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning

Comment: EMNLP 2021. Code and data are available at https://github.com/WadeYin9712/GD-VCR

Link: http://arxiv.org/abs/2109.06860

Abstract

Commonsense is defined as the knowledge that is shared by everyone. However, certain types of commonsense knowledge are correlated with culture and geographic locations and they are only shared locally. For example, the scenarios of wedding ceremonies vary across regions due to different customs influenced by historical and religious factors. Such regional characteristics, however, are generally omitted in prior work. In this paper, we construct a Geo-Diverse Visual Commonsense Reasoning dataset (GD-VCR) to test vision-and-language models' ability to understand cultural and geo-location-specific commonsense. In particular, we study two state-of-the-art Vision-and-Language models, VisualBERT and ViLBERT trained on VCR, a standard multimodal commonsense benchmark with images primarily from Western regions. We then evaluate how well the trained models can generalize to answering the questions in GD-VCR. We find that the performance of both models for non-Western regions including East Asia, South Asia, and Africa is significantly lower than that for Western region. We analyze the reasons behind the performance disparity and find that the performance gap is larger on QA pairs that: 1) are concerned with culture-related scenarios, e.g., weddings, religious activities, and festivals; 2) require high-level geo-diverse commonsense reasoning rather than low-order perception and recognition. Dataset and code are released at https://github.com/WadeYin9712/GD-VCR.

Summarize-then-Answer: Generating Concise Explanations for Multi-hop Reading Comprehension

Comment: Accepted to EMNLP 2021 Long Paper (Main Track)

Link: http://arxiv.org/abs/2109.06853

Abstract

How can we generate concise explanations for multi-hop Reading Comprehension (RC)? The current strategies of identifying supporting sentences can be seen as an extractive question-focused summarization of the input text. However, these extractive explanations are not necessarily concise, i.e., not minimally sufficient for answering a question. Instead, we advocate for an abstractive approach, where we propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system. Given a limited amount of human-annotated abstractive explanations, we train the abstractive explainer in a semi-supervised manner, where we start from the supervised model and then train it further through trial and error maximizing a conciseness-promoted reward function. Our experiments demonstrate that the proposed abstractive explainer can generate more compact explanations than an extractive explainer with limited supervision (only 2k instances) while maintaining sufficiency.
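
The abstract does not spell out the reward, so the following is only a minimal sketch of what a conciseness-promoting reward could look like: it pays out only when the downstream RC model still answers correctly from the summary, minus a length penalty. The function and its `alpha` trade-off weight are hypothetical, not the paper's formulation.

```python
def conciseness_reward(summary_tokens: list[str], answer_correct: bool,
                       alpha: float = 0.05) -> float:
    """Hypothetical reward: sufficiency (did the RC model still answer
    correctly from the summary alone?) minus a penalty on summary length."""
    sufficiency = 1.0 if answer_correct else 0.0
    return sufficiency - alpha * len(summary_tokens)
```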

The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation

Comment: EMNLP 2021 (20 pages)

Link: http://arxiv.org/abs/2109.06835

Abstract

Recent text generation research has increasingly focused on open-ended domains such as story and poetry generation. Because models built for such tasks are difficult to evaluate automatically, most researchers in the space justify their modeling choices by collecting crowdsourced human judgments of text quality (e.g., Likert scores of coherence or grammaticality) from Amazon Mechanical Turk (AMT). In this paper, we first conduct a survey of 45 open-ended text generation papers and find that the vast majority of them fail to report crucial details about their AMT tasks, hindering reproducibility. We then run a series of story evaluation experiments with both AMT workers and English teachers and discover that even with strict qualification filters, AMT workers (unlike teachers) fail to distinguish between model-generated text and human-generated references. We show that AMT worker judgments improve when they are shown model-generated output alongside human-generated references, which enables the workers to better calibrate their ratings. Finally, interviews with the English teachers provide deeper insights into the challenges of the evaluation process, particularly when rating model-generated text.

Types of Out-of-Distribution Texts and How to Detect Them

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.06827

Abstract

Despite agreement on the importance of detecting out-of-distribution (OOD) examples, there is little consensus on the formal definition of OOD examples and how to best detect them. We categorize these examples by whether they exhibit a background shift or a semantic shift, and find that the two major approaches to OOD detection, model calibration and density estimation (language modeling for text), have distinct behavior on these types of OOD data. Across 14 pairs of in-distribution and OOD English natural language understanding datasets, we find that density estimation methods consistently beat calibration methods in background shift settings, while performing worse in semantic shift settings. In addition, we find that both methods generally fail to detect examples from challenge data, highlighting a weak spot for current methods. Since no single method works well across all settings, our results call for an explicit definition of OOD examples when evaluating different detection methods.
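
For readers unfamiliar with the two families compared here, this is a minimal sketch of both scoring rules, assuming you already have classifier logits (calibration route) and per-token log-probabilities from an in-distribution language model (density route); thresholding and evaluation are left out.

```python
import numpy as np

def calibration_score(logits: np.ndarray) -> np.ndarray:
    """Calibration-style OOD score: maximum softmax probability (MSP).
    A low maximum probability flags a likely OOD input."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def density_score(token_log_probs: np.ndarray) -> float:
    """Density-style OOD score: mean token log-likelihood under a language
    model trained on in-distribution text. A low likelihood flags OOD."""
    return float(np.mean(token_log_probs))
```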

LM-Critic: Language Models for Unsupervised Grammatical Error Correction

Comment: EMNLP 2021. Code & data available at https://github.com/michiyasunaga/LM-Critic

Link: http://arxiv.org/abs/2109.06822

Abstract

Training a model for grammatical error correction (GEC) requires a set of labeled ungrammatical / grammatical sentence pairs, but manually annotating such pairs can be expensive. Recently, the Break-It-Fix-It (BIFI) framework has demonstrated strong results on learning to repair a broken program without any labeled examples, but this relies on a perfect critic (e.g., a compiler) that returns whether an example is valid or not, which does not exist for the GEC task. In this work, we show how to leverage a pretrained language model (LM) in defining an LM-Critic, which judges a sentence to be grammatical if the LM assigns it a higher probability than its local perturbations. We apply this LM-Critic and BIFI along with a large set of unlabeled sentences to bootstrap realistic ungrammatical / grammatical pairs for training a corrector. We evaluate our approach on GEC datasets across multiple domains (CoNLL-2014, BEA-2019, GMEG-wiki and GMEG-yahoo) and show that it outperforms existing methods in both the unsupervised setting (+7.7 F0.5) and the supervised setting (+0.5 F0.5).
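
The critic itself is easy to picture in code. Below is a minimal sketch under two assumptions: `lm_log_prob` is any callable mapping a sentence to its log-probability under a pretrained LM (e.g., GPT-2 scored via HuggingFace), and the toy `word_swap_perturbations` stands in for the paper's richer word- and character-level edits.

```python
import random

def word_swap_perturbations(sentence: str, k: int = 8) -> list[str]:
    """Toy local perturbations: swap a random pair of adjacent words.
    (Illustrative only; the paper uses a broader set of local edits.)"""
    words = sentence.split()
    out = set()
    for _ in range(k):
        if len(words) < 2:
            break
        i = random.randrange(len(words) - 1)
        w = words[:]
        w[i], w[i + 1] = w[i + 1], w[i]
        out.add(" ".join(w))
    out.discard(sentence)
    return list(out)

def lm_critic(sentence, lm_log_prob, perturb=word_swap_perturbations) -> bool:
    """Judge `sentence` grammatical iff the LM scores it at least as high
    as every sampled local perturbation."""
    base = lm_log_prob(sentence)
    return all(base >= lm_log_prob(p) for p in perturb(sentence))
```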

Everything Is All It Takes: A Multipronged Strategy for Zero-Shot Cross-Lingual Information Extraction

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.06798

Abstract

Zero-shot cross-lingual information extraction (IE) describes the construction of an IE model for some target language, given existing annotations exclusively in some other language, typically English. While the advance of pretrained multilingual encoders suggests an easy optimism of "train on English, run on any language", we find through a thorough exploration and extension of techniques that a combination of approaches, both new and old, leads to better performance than any one cross-lingual strategy in particular. We explore techniques including data projection and self-training, and how different pretrained encoders impact them. We use English-to-Arabic IE as our initial example, demonstrating strong performance in this setting for event extraction, named entity recognition, part-of-speech tagging, and dependency parsing. We then apply data projection and self-training to three tasks across eight target languages. Because no single set of techniques performs the best across all tasks, we encourage practitioners to explore various configurations of the techniques described in this work when seeking to improve on zero-shot training.

Adaptive Information Seeking for Open-Domain Question Answering

Comment: Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.06747

Abstract

Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus. Recently, iterative approaches have been proven to be effective for complex questions, by recursively retrieving new evidence at each step. However, almost all existing iterative approaches use predefined strategies, either applying the same retrieval function multiple times or fixing the order of different retrieval functions, which cannot fulfill the diverse requirements of various questions. In this paper, we propose a novel adaptive information-seeking strategy for open-domain question answering, namely AISO. Specifically, the whole retrieval and answer process is modeled as a partially observed Markov decision process, where three types of retrieval operations (e.g., BM25, DPR, and hyperlink) and one answer operation are defined as actions. According to the learned policy, AISO could adaptively select a proper retrieval action to seek the missing evidence at each step, based on the collected evidence and the reformulated query, or directly output the answer when the evidence set is sufficient for the question. Experiments on SQuAD Open and HotpotQA fullwiki, which serve as single-hop and multi-hop open-domain QA benchmarks, show that AISO outperforms all baseline methods with predefined strategies in terms of both retrieval and answer evaluations.
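
The control flow described above amounts to a policy-driven dispatch loop. A minimal sketch follows; `policy`, `retrievers`, `reformulate`, and `answer` are hypothetical callables standing in for the learned components, not the authors' API.

```python
def adaptive_retrieval_loop(question, policy, retrievers, reformulate, answer,
                            max_steps: int = 8):
    """Policy-driven evidence gathering: at each step the learned policy
    picks a retrieval action (a key of `retrievers`) or 'answer' to stop."""
    evidence, query = [], question
    for _ in range(max_steps):
        action = policy(question, query, evidence)  # e.g. 'bm25', 'dpr', 'link', 'answer'
        if action == "answer":
            break
        evidence.append(retrievers[action](query))  # seek the missing evidence
        query = reformulate(question, evidence)     # update the query state
    return answer(question, evidence)
```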

A Novel Global Feature-Oriented Relational Triple Extraction Model based on Table Filling

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.06705

Abstract

Table filling based relational triple extraction methods are attracting growing research interests due to their promising performance and their abilities on extracting triples from complex sentences. However, this kind of methods are far from their full potential because most of them only focus on using local features but ignore the global associations of relations and of token pairs, which increases the possibility of overlooking some important information during triple extraction. To overcome this deficiency, we propose a global feature-oriented triple extraction model that makes full use of the mentioned two kinds of global associations. Specifically, we first generate a table feature for each relation. Then two kinds of global associations are mined from the generated table features. Next, the mined global associations are integrated into the table feature of each relation. This "generate-mine-integrate" process is performed multiple times so that the table feature of each relation is refined step by step. Finally, each relation's table is filled based on its refined table feature, and all triples linked to this relation are extracted based on its filled table. We evaluate the proposed model on three benchmark datasets. Experimental results show our model is effective and it achieves state-of-the-art results on all of these datasets. The source code of our work is available at: https://github.com/neukg/GRTE.

KFCNet: Knowledge Filtering and Contrastive Learning Network for Generative Commonsense Reasoning

Comment: Accepted to EMNLP 2021 Findings

Link: http://arxiv.org/abs/2109.06704

Abstract

Pre-trained language models have led to substantial gains over a broad range of natural language processing (NLP) tasks, but have been shown to have limitations for natural language generation tasks with high-quality requirements on the output, such as commonsense generation and ad keyword generation. In this work, we present a novel Knowledge Filtering and Contrastive learning Network (KFCNet) which references external knowledge and achieves better generation performance. Specifically, we propose a BERT-based filter model to remove low-quality candidates, and apply contrastive learning separately to each of the encoder and decoder, within a general encoder-decoder architecture. The encoder contrastive module helps to capture global target semantics during encoding, and the decoder contrastive module enhances the utility of retrieved prototypes while learning general features. Extensive experiments on the CommonGen benchmark show that our model outperforms the previous state of the art by a large margin: +6.6 points (42.5 vs. 35.9) for BLEU-4, +3.7 points (33.3 vs. 29.6) for SPICE, and +1.3 points (18.3 vs. 17.0) for CIDEr. We further verify the effectiveness of the proposed contrastive module on ad keyword generation, and show that our model has potential commercial value.
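
The abstract does not specify the exact contrastive objective, so the sketch below shows the generic in-batch InfoNCE loss that encoder/decoder contrastive modules of this kind are typically built on; treat it as an illustration of the technique, not KFCNet's actual loss.

```python
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Generic in-batch InfoNCE: row i of `positives` is the positive for
    row i of `anchors`; every other row in the batch acts as a negative."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature                 # (batch, batch) similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```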

Efficient Inference for Multilingual Neural Machine Translation

Comment: Accepted as a long paper to EMNLP 2021

Link: http://arxiv.org/abs/2109.06679

Abstract

Multilingual NMT has become an attractive solution for MT deployment in production. But to match bilingual quality, it comes at the cost of larger and slower models. In this work, we consider several ways to make multilingual NMT faster at inference without degrading its quality. We experiment with several "light decoder" architectures in two 20-language multi-parallel settings: small-scale on TED Talks and large-scale on ParaCrawl. Our experiments demonstrate that combining a shallow decoder with vocabulary filtering leads to more than twice faster inference with no loss in translation quality. We validate our findings with BLEU and chrF (on 380 language pairs), robustness evaluation and human evaluation.

MDAPT: Multilingual Domain Adaptive Pretraining in a Single Model

Comment: Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.06605

Abstract

Domain adaptive pretraining, i.e. the continued unsupervised pretraining of a language model on domain-specific text, improves the modelling of text for downstream tasks within the domain. Numerous real-world applications are based on domain-specific text, e.g. working with financial or biomedical documents, and these applications often need to support multiple languages. However, large-scale domain-specific multilingual pretraining data for such scenarios can be difficult to obtain, due to regulations, legislation, or simply a lack of language- and domain-specific text. One solution is to train a single multilingual model, taking advantage of the data available in as many languages as possible. In this work, we explore the benefits of domain adaptive pretraining with a focus on adapting to multiple languages within a specific domain. We propose different techniques to compose pretraining corpora that enable a language model to both become domain-specific and multilingual. Evaluation on nine domain-specific datasets - for biomedical named entity recognition and financial sentence classification - covering seven different languages show that a single multilingual domain-specific model can outperform the general multilingual model, and performs close to its monolingual counterpart. This finding holds across two different pretraining methods, adapter-based pretraining and full model pretraining.

Non-Parametric Unsupervised Domain Adaptation for Neural Machine Translation

Comment: Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.06604

Abstract

Recently, $k$NN-MT has shown the promising capability of directly incorporating the pre-trained neural machine translation (NMT) model with domain-specific token-level $k$-nearest-neighbor ($k$NN) retrieval to achieve domain adaptation without retraining. Despite being conceptually attractive, it heavily relies on high-quality in-domain parallel corpora, limiting its capability on unsupervised domain adaptation, where in-domain parallel corpora are scarce or nonexistent. In this paper, we propose a novel framework that directly uses in-domain monolingual sentences in the target language to construct an effective datastore for $k$-nearest-neighbor retrieval. To this end, we first introduce an autoencoder task based on the target language, and then insert lightweight adapters into the original NMT model to map the token-level representation of this task to the ideal representation of translation task. Experiments on multi-domain datasets demonstrate that our proposed approach significantly improves the translation accuracy with target-side monolingual data, while achieving comparable performance with back-translation.
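
As background for the retrieval step, here is a minimal numpy sketch of standard $k$NN-MT decoding: the datastore maps decoder hidden states to the next target token, and the retrieved distribution is interpolated with the NMT model's own distribution. The paper's contribution, building that datastore from monolingual target text via an autoencoder task and adapters, is not shown; all names below are illustrative.

```python
import numpy as np

def knn_token_probs(query: np.ndarray, keys: np.ndarray, values: np.ndarray,
                    vocab_size: int, k: int = 8, temperature: float = 10.0) -> np.ndarray:
    """kNN next-token distribution from a datastore of (hidden state, token)
    pairs. `keys`: (N, d) decoder states, `values`: (N,) next-token ids."""
    d2 = ((keys - query) ** 2).sum(axis=1)        # squared L2 distances
    idx = np.argpartition(d2, k)[:k]              # indices of k nearest neighbors
    w = np.exp(-d2[idx] / temperature)
    w /= w.sum()
    p = np.zeros(vocab_size)
    for token, weight in zip(values[idx], w):
        p[token] += weight                        # aggregate neighbor mass per token
    return p

def interpolate(p_nmt: np.ndarray, p_knn: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Final next-token distribution: lam * p_kNN + (1 - lam) * p_NMT."""
    return lam * p_knn + (1 - lam) * p_nmt
```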

'Just What do You Think You're Doing, Dave?' A Checklist for Responsible Data Use in NLP

Comment: Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.06598

Abstract

A key part of the NLP ethics movement is responsible use of data, but exactly what that means or how it can be best achieved remain unclear. This position paper discusses the core legal and ethical principles for collection and sharing of textual data, and the tensions between them. We propose a potential checklist for responsible data (re-)use that could both standardise the peer review of conference submissions, as well as enable a more in-depth view of published research across the community. Our proposal aims to contribute to the development of a consistent standard for data (re-)use, embraced across NLP conferences.

Learning Bill Similarity with Annotated and Augmented Corpora of Bills

Comment: Accepted at EMNLP 2021 (Long paper)

Link: http://arxiv.org/abs/2109.06527

Abstract

Bill writing is a critical element of representative democracy. However, it is often overlooked that most legislative bills are derived, or even directly copied, from other bills. Despite the significance of bill-to-bill linkages for understanding the legislative process, existing approaches fail to address semantic similarities across bills, let alone reordering or paraphrasing which are prevalent in legal document writing. In this paper, we overcome these limitations by proposing a 5-class classification task that closely reflects the nature of the bill generation process. In doing so, we construct a human-labeled dataset of 4,721 bill-to-bill relationships at the subsection level and release this annotated dataset to the research community. To augment the dataset, we generate synthetic data with varying degrees of similarity, mimicking the complex bill writing process. We use BERT variants and apply multi-stage training, sequentially fine-tuning our models with synthetic and human-labeled datasets. We find that the predictive performance significantly improves when training with both human-labeled and synthetic data. Finally, we apply our trained model to infer section- and bill-level similarities. Our analysis shows that the proposed methodology successfully captures the similarities across legal documents at various levels of aggregation.

Different Strokes for Different Folks: Investigating Appropriate Further Pre-training Approaches for Diverse Dialogue Tasks

Comment: Accepted as a long paper at EMNLP 2021 (Main Conference)

Link: http://arxiv.org/abs/2109.06524

Abstract

Loading models pre-trained on the large-scale corpus in the general domain and fine-tuning them on specific downstream tasks is gradually becoming a paradigm in Natural Language Processing. Previous investigations prove that introducing a further pre-training phase between pre-training and fine-tuning phases to adapt the model on the domain-specific unlabeled data can bring positive effects. However, most of these further pre-training works just keep running the conventional pre-training task, e.g., masked language model, which can be regarded as the domain adaptation to bridge the data distribution gap. After observing diverse downstream tasks, we suggest that different tasks may also need a further pre-training phase with appropriate training tasks to bridge the task formulation gap. To investigate this, we carry out a study for improving multiple task-oriented dialogue downstream tasks through designing various tasks at the further pre-training phase. The experiment shows that different downstream tasks prefer different further pre-training tasks, which have intrinsic correlation and most further pre-training tasks significantly improve certain target tasks rather than all. Our investigation indicates that it is of great importance and effectiveness to design appropriate further pre-training tasks modeling specific information that benefit downstream tasks. Besides, we present multiple constructive empirical conclusions for enhancing task-oriented dialogues.

Netmarble AI Center's WMT21 Automatic Post-Editing Shared Task Submission

Comment: WMT21 Automatic Post-Editing Shared Task System Paper (at EMNLP 2021 Workshop)

Link: http://arxiv.org/abs/2109.06515

Abstract

This paper describes Netmarble's submission to the WMT21 Automatic Post-Editing (APE) Shared Task for the English-German language pair. First, we propose a Curriculum Training Strategy in training stages. Facebook FAIR's WMT19 news translation model was chosen to engage the large and powerful pre-trained neural networks. Then, we post-train the translation model with different levels of data at each training stage. As the training stages go on, we make the system learn to solve multiple tasks by adding extra information at different training stages gradually. We also show a way to utilize the additional data in large volume for APE tasks. For further improvement, we apply a Multi-Task Learning Strategy with the Dynamic Weight Average during the fine-tuning stage. To fine-tune the APE corpus with limited data, we add some related subtasks to learn a unified representation. Finally, for better performance, we leverage external translations as augmented machine translation (MT) during the post-training and fine-tuning. As experimental results show, our APE system significantly improves the translations of provided MT results by -2.848 and +3.74 on the development dataset in terms of TER and BLEU, respectively. It also demonstrates its effectiveness on the test dataset with higher quality than the development dataset.
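
Dynamic Weight Average is not defined in the abstract; the sketch below follows the standard recipe from Liu et al. (2019), where each task's weight tracks the ratio of its last two epoch losses, softened by a temperature. Treat it as a reference implementation of the general technique, not of Netmarble's exact setup.

```python
import numpy as np

def dwa_weights(loss_history: list[np.ndarray], temperature: float = 2.0) -> np.ndarray:
    """Dynamic Weight Average over K tasks. `loss_history[t]` holds the K
    average task losses of epoch t. Returns K weights that sum to K."""
    K = len(loss_history[-1])
    if len(loss_history) < 2:
        return np.ones(K)                   # equal weights until enough history
    ratios = np.asarray(loss_history[-1]) / np.asarray(loss_history[-2])
    exp = np.exp(ratios / temperature)      # slower-improving tasks get more weight
    return K * exp / exp.sum()
```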

Tribrid: Stance Classification with Neural Inconsistency Detection

Comment: Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.06508

Abstract

We study the problem of performing automatic stance classification on social media with neural architectures such as BERT. Although these architectures deliver impressive results, their level is not yet comparable to the one of humans and they might produce errors that have a significant impact on the downstream task (e.g., fact-checking). To improve the performance, we present a new neural architecture where the input also includes automatically generated negated perspectives over a given claim. The model is jointly learned to make simultaneously multiple predictions, which can be used either to improve the classification of the original perspective or to filter out doubtful predictions. In the first case, we propose a weakly supervised method for combining the predictions into a final one. In the second case, we show that using the confidence scores to remove doubtful predictions allows our method to achieve human-like performance over the retained information, which is still a sizable part of the original input.

AligNART: Non-autoregressive Neural Machine Translation by Jointly Learning to Estimate Alignment and Translate

Comment: Accepted by EMNLP 2021

Link: http://arxiv.org/abs/2109.06481

Abstract

Non-autoregressive neural machine translation (NART) models suffer from the multi-modality problem which causes translation inconsistency such as token repetition. Most recent approaches have attempted to solve this problem by implicitly modeling dependencies between outputs. In this paper, we introduce AligNART, which leverages full alignment information to explicitly reduce the modality of the target distribution. AligNART divides the machine translation task into $(i)$ alignment estimation and $(ii)$ translation with aligned decoder inputs, guiding the decoder to focus on simplified one-to-one translation. To alleviate the alignment estimation problem, we further propose a novel alignment decomposition method. Our experiments show that AligNART outperforms previous non-iterative NART models that focus on explicit modality reduction on WMT14 En$\leftrightarrow$De and WMT16 Ro$\rightarrow$En. Furthermore, AligNART achieves BLEU scores comparable to those of the state-of-the-art connectionist temporal classification based models on WMT14 En$\leftrightarrow$De. We also observe that AligNART effectively addresses the token repetition problem even without sequence-level knowledge distillation.

Logic-level Evidence Retrieval and Graph-based Verification Network for Table-based Fact Verification

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.06480

Abstract

Table-based fact verification task aims to verify whether the given statement is supported by the given semi-structured table. Symbolic reasoning with logical operations plays a crucial role in this task. Existing methods leverage programs that contain rich logical information to enhance the verification process. However, due to the lack of fully supervised signals in the program generation process, spurious programs can be derived and employed, which leads to the inability of the model to catch helpful logical operations. To address the aforementioned problems, in this work, we formulate the table-based fact verification task as an evidence retrieval and reasoning framework, proposing the Logic-level Evidence Retrieval and Graph-based Verification network (LERGV). Specifically, we first retrieve logic-level program-like evidence from the given table and statement as supplementary evidence for the table. After that, we construct a logic-level graph to capture the logical relations between entities and functions in the retrieved evidence, and design a graph-based verification network to perform logic-level graph-based reasoning based on the constructed graph to classify the final entailment relation. Experimental results on the large-scale benchmark TABFACT show the effectiveness of the proposed approach.

Task-adaptive Pre-training and Self-training are Complementary for Natural Language Understanding

Comment: Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.06466

Abstract

Task-adaptive pre-training (TAPT) and Self-training (ST) have emerged as the major semi-supervised approaches to improve natural language understanding (NLU) tasks with massive amounts of unlabeled data. However, it's unclear whether they learn similar representations or they can be effectively combined. In this paper, we show that TAPT and ST can be complementary with a simple TFS protocol by following the TAPT -> Finetuning -> Self-training (TFS) process. Experimental results show that the TFS protocol can effectively utilize unlabeled data to achieve strong combined gains consistently across six datasets covering sentiment classification, paraphrase identification, natural language inference, named entity recognition and dialogue slot classification. We investigate various semi-supervised settings and consistently show that gains from TAPT and ST can be strongly additive by following the TFS procedure. We hope that TFS could serve as an important semi-supervised baseline for future NLP studies.
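
To make the three-stage protocol concrete, here is a compact sketch of the TFS loop. The `model` interface (`adaptive_pretrain`, `finetune`, `predict_with_confidence`) and the 0.9 confidence cutoff are hypothetical placeholders, not the paper's implementation.

```python
def tfs(model, unlabeled_texts, labeled_data, rounds: int = 3, threshold: float = 0.9):
    """TAPT -> Finetuning -> Self-training, per the TFS protocol."""
    model.adaptive_pretrain(unlabeled_texts)    # TAPT: e.g. MLM on task-domain text
    model.finetune(labeled_data)                # supervised fine-tuning
    for _ in range(rounds):                     # ST: re-train on confident pseudo-labels
        pseudo = []
        for text in unlabeled_texts:
            label, confidence = model.predict_with_confidence(text)
            if confidence >= threshold:
                pseudo.append((text, label))
        model.finetune(labeled_data + pseudo)
    return model
```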

Uncovering Implicit Gender Bias in Narratives through Commonsense Inference

Comment: Accepted at Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.06437

Abstract

Pre-trained language models learn socially harmful biases from their training corpora, and may repeat these biases when used for generation. We study gender biases associated with the protagonist in model-generated stories. Such biases may be expressed either explicitly ("women can't park") or implicitly (e.g. an unsolicited male character guides her into a parking space). We focus on implicit biases, and use a commonsense reasoning engine to uncover them. Specifically, we infer and analyze the protagonist's motivations, attributes, mental states, and implications on others. Our findings regarding implicit biases are in line with prior work that studied explicit biases, for example showing that female characters' portrayal is centered around appearance, while male figures' focus on intellect.

Gradient Imitation Reinforcement Learning for Low Resource Relation Extraction

Comment: In EMNLP 2021 as a long paper. Code and data available at https://github.com/THU-BPM/GradLRE

Link: http://arxiv.org/abs/2109.06415

Abstract

Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce. Existing works either utilize self-training scheme to generate pseudo labels that will cause the gradual drift problem, or leverage meta-learning scheme which does not solicit feedback explicitly. To alleviate selection bias due to the lack of feedback loops in existing LRE learning paradigms, we developed a Gradient Imitation Reinforcement Learning method to encourage pseudo label data to imitate the gradient descent direction on labeled data and bootstrap its optimization capability through trial and error. We also propose a framework called GradLRE, which handles two major scenarios in low-resource relation extraction. Besides the scenario where unlabeled data is sufficient, GradLRE handles the situation where no unlabeled data is available, by exploiting a contextualized augmentation method to generate data. Experimental results on two public datasets demonstrate the effectiveness of GradLRE on low resource relation extraction when comparing with baselines.
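
One natural way to score how well a pseudo-labeled batch "imitates" the labeled gradient, and the reading the abstract suggests, is the cosine similarity between the two gradient vectors; the sketch below implements that signal. This is my assumption about the reward's form, not code from GradLRE.

```python
import numpy as np

def gradient_imitation_reward(pseudo_grads, labeled_grads) -> float:
    """Cosine similarity between the gradient computed on pseudo-labeled data
    and the gradient computed on labeled data: +1 means perfectly aligned
    descent directions, -1 opposite. Inputs are lists of parameter gradients."""
    g_pseudo = np.concatenate([g.ravel() for g in pseudo_grads])
    g_labeled = np.concatenate([g.ravel() for g in labeled_grads])
    denom = np.linalg.norm(g_pseudo) * np.linalg.norm(g_labeled) + 1e-12
    return float(g_pseudo @ g_labeled / denom)
```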

Progressively Guide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding

Comment: Accepted as a long paper in the main conference of EMNLP 2021

Link: http://arxiv.org/abs/2109.06400

Abstract

A key solution to temporal sentence grounding (TSG) exists in how to learn effective alignment between vision and language features extracted from an untrimmed video and a sentence description. Existing methods mainly leverage vanilla soft attention to perform the alignment in a single-step process. However, such single-step attention is insufficient in practice, since complicated relations between inter- and intra-modality are usually obtained through multi-step reasoning. In this paper, we propose an Iterative Alignment Network (IA-Net) for the TSG task, which iteratively interacts inter- and intra-modal features within multiple steps for more accurate grounding. Specifically, during the iterative reasoning process, we pad multi-modal features with learnable parameters to alleviate the nowhere-to-attend problem of non-matched frame-word pairs, and enhance the basic co-attention mechanism in a parallel manner. To further calibrate the misaligned attention caused by each reasoning step, we also devise a calibration module following each attention module to refine the alignment knowledge. With such iterative alignment scheme, our IA-Net can robustly capture the fine-grained relations between vision and language domains step-by-step for progressively reasoning the temporal boundaries. Extensive experiments conducted on three challenging benchmarks demonstrate that our proposed model performs better than the state-of-the-arts.

Adaptive Proposal Generation Network for Temporal Sentence Localization in Videos

Comment: Accepted as a long paper in the main conference of EMNLP 2021

Link: http://arxiv.org/abs/2109.06398

Abstract

We address the problem of temporal sentence localization in videos (TSLV). Traditional methods follow a top-down framework which localizes the target segment with pre-defined segment proposals. Although they have achieved decent performance, the proposals are handcrafted and redundant. Recently, the bottom-up framework attracts increasing attention due to its superior efficiency. It directly predicts the probabilities for each frame as a boundary. However, the performance of the bottom-up model is inferior to the top-down counterpart as it fails to exploit the segment-level interaction. In this paper, we propose an Adaptive Proposal Generation Network (APGN) to maintain the segment-level interaction while speeding up the efficiency. Specifically, we first perform a foreground-background classification upon the video and regress on the foreground frames to adaptively generate proposals. In this way, the handcrafted proposal design is discarded and the redundant proposals are decreased. Then, a proposal consolidation module is further developed to enhance the semantic of the generated proposals. Finally, we locate the target moments with these generated proposals following the top-down framework. Extensive experiments on three challenging benchmarks show that our proposed APGN significantly outperforms previous state-of-the-art methods.

Rationales for Sequential Predictions

Comment: To appear in the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)

Link: http://arxiv.org/abs/2109.06387

Abstract

Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain. We consider model explanations through rationales, subsets of context that can explain individual model predictions. We find sequential rationales by solving a combinatorial optimization: the best rationale is the smallest subset of input tokens that would predict the same output as the full sequence. Enumerating all subsets is intractable, so we propose an efficient greedy algorithm to approximate this objective. The algorithm, which is called greedy rationalization, applies to any model. For this approach to be effective, the model should form compatible conditional distributions when making predictions on incomplete subsets of the context. This condition can be enforced with a short fine-tuning step. We study greedy rationalization on language modeling and machine translation. Compared to existing baselines, greedy rationalization is best at optimizing the combinatorial objective and provides the most faithful rationales. On a new dataset of annotated sequential rationales, greedy rationales are most similar to human rationales.
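
The greedy procedure is simple enough to sketch. Below, `target_prob(subset)` and `predicts_target(subset)` are hypothetical callables wrapping the model: the probability of the full-context prediction given only the positions in `subset`, and whether that prediction is already the argmax. The paper's compatibility fine-tuning is what makes such subset queries meaningful.

```python
def greedy_rationale(n_tokens: int, target_prob, predicts_target) -> list[int]:
    """Greedily grow the smallest token subset that reproduces the model's
    full-context prediction, per the greedy rationalization objective."""
    chosen: set[int] = set()
    while not predicts_target(chosen) and len(chosen) < n_tokens:
        candidates = [i for i in range(n_tokens) if i not in chosen]
        # add the token that most increases the target's probability
        best = max(candidates, key=lambda i: target_prob(chosen | {i}))
        chosen.add(best)
    return sorted(chosen)
```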

Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation

Comment: EMNLP 2021, Code available at https://github.com/tanyuqian/ctc-gen-eval

Link: http://arxiv.org/abs/2109.06379

Abstract

Natural language generation (NLG) spans a broad range of tasks, each of which serves for specific objectives and desires different properties of generated text. The complexity makes automatic evaluation of NLG particularly challenging. Previous work has typically focused on a single task and developed individual evaluation metrics based on specific intuitions. In this paper, we propose a unifying perspective based on the nature of information change in NLG tasks, including compression (e.g., summarization), transduction (e.g., text rewriting), and creation (e.g., dialog). Information alignment between input, context, and output text plays a common central role in characterizing the generation. With automatic alignment prediction models, we develop a family of interpretable metrics that are suitable for evaluating key aspects of different NLG tasks, often without need of gold reference data. Experiments show the uniformly designed metrics achieve stronger or comparable correlations with human judgement compared to state-of-the-art metrics in each of diverse tasks, including text summarization, style transfer, and knowledge-grounded dialog.

Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based QA Framework

Comment: EMNLP Findings 2021, Long

Link: http://arxiv.org/abs/2109.05897

Abstract

Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering. This can be primarily attributed to the absence of standard benchmark datasets. In this paper we meticulously create a large amount of data connected with E-manuals and develop a suitable algorithm to exploit it. We collect E-Manual Corpus, a huge corpus of 307,957 E-manuals, and pretrain RoBERTa on this large corpus. We create various benchmark QA datasets which include question answer pairs curated by experts based upon two E-manuals, real user questions from a Community Question Answering Forum pertaining to E-manuals, etc. We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronics devices. Built upon the pretrained RoBERTa, it harbors a supervised multi-task learning framework which efficiently performs the dual tasks of identifying the section in the E-manual where the answer can be found and the exact answer span within that section. For E-Manual annotated question-answer pairs, we show an improvement of about 40% in ROUGE-L F1 scores over the most competitive baseline. We perform a detailed ablation study and establish the versatility of EMQAP across different circumstances. The code and datasets are shared at https://github.com/abhi1nandy2/EMNLP-2021-Findings, and the corresponding project website is https://sites.google.com/view/emanualqa/home.

Mitigating Language-Dependent Ethnic Bias in BERT

Comment: 17 pages including references and appendix. To appear in EMNLP 2021 (camera-ready ver.)

Link: http://arxiv.org/abs/2109.05704

Abstract

BERT and other large-scale language models (LMs) contain gender and racial bias. They also exhibit other dimensions of social bias, most of which have not been studied in depth, and some of which vary depending on the language. In this paper, we study ethnic bias and how it varies across languages by analyzing and mitigating ethnic bias in monolingual BERT for English, German, Spanish, Korean, Turkish, and Chinese. To observe and quantify ethnic bias, we develop a novel metric called Categorical Bias score. Then we propose two methods for mitigation; first using a multilingual model, and second using contextual word alignment of two monolingual models. We compare our proposed methods with monolingual BERT and show that these methods effectively alleviate the ethnic bias. Which of the two methods works better depends on the amount of NLP resources available for that language. We additionally experiment with Arabic and Greek to verify that our proposed methods work for a wider variety of languages.
