
Today's arXiv Picks | 46 New EMNLP 2021 Papers


About #Today's arXiv Picks

This is a column run by "AI Academic Frontier". Each day the editors select high-quality papers from arXiv and deliver them to readers.

Neural Machine Translation Quality and Post-Editing Performance

Comment: 9 pages, 1 page appendix. To be presented at EMNLP2021

Link: http://arxiv.org/abs/2109.05016

Abstract

We test the natural expectation that using MT in professional translation saves human processing time. The last such study was carried out by Sanchez-Torron and Koehn (2016) with phrase-based MT, artificially reducing the translation quality. In contrast, we focus on neural MT (NMT) of high quality, which has become the state-of-the-art approach since then and also got adopted by most translation companies. Through an experimental study involving over 30 professional translators for English -> Czech translation, we examine the relationship between NMT performance and post-editing time and quality. Across all models, we found that better MT systems indeed lead to fewer changes in the sentences in this industry setting. The relation between system quality and post-editing time is however not straightforward and, contrary to the results on phrase-based MT, BLEU is definitely not a stable predictor of the time or final output quality.

BiSECT: Learning to Split and Rephrase Sentences with Bitexts

Comment: 9 pages, 9 figures. Long paper to appear in Empirical Methods in Natural Language Processing 2021 (EMNLP 2021)

Link: http://arxiv.org/abs/2109.05006

Abstract

An important task in NLP applications such as sentence simplification is the ability to take a long, complex sentence and split it into shorter sentences, rephrasing as necessary. We introduce a novel dataset and a new model for this `split and rephrase' task. Our BiSECT training data consists of 1 million long English sentences paired with shorter, meaning-equivalent English sentences. We obtain these by extracting 1-2 sentence alignments in bilingual parallel corpora and then using machine translation to convert both sides of the corpus into the same language. BiSECT contains higher quality training examples than previous Split and Rephrase corpora, with sentence splits that require more significant modifications. We categorize examples in our corpus, and use these categories in a novel model that allows us to target specific regions of the input sentence to be split and edited. Moreover, we show that models trained on BiSECT can perform a wider variety of split operations and improve upon previous state-of-the-art approaches in automatic and human evaluations.

Distantly-Supervised Named Entity Recognition with Noise-Robust Learning and Language Model Augmented Self-Training

Comment: EMNLP 2021. (Code: https://github.com/yumeng5/RoSTER)

Link: http://arxiv.org/abs/2109.05003

Abstract

We study the problem of training named entity recognition (NER) models using only distantly-labeled data, which can be automatically obtained by matching entity mentions in the raw text with entity types in a knowledge base. The biggest challenge of distantly-supervised NER is that the distant supervision may induce incomplete and noisy labels, rendering the straightforward application of supervised learning ineffective. In this paper, we propose (1) a noise-robust learning scheme comprised of a new loss function and a noisy label removal step, for training NER models on distantly-labeled data, and (2) a self-training method that uses contextualized augmentations created by pre-trained language models to improve the generalization ability of the NER model. On three benchmark datasets, our method achieves superior performance, outperforming existing distantly-supervised NER models by significant margins.

Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04994

Abstract

Unlike well-structured text, such as news reports and encyclopedia articles, dialogue content often comes from two or more interlocutors, exchanging information with each other. In such a scenario, the topic of a conversation can vary upon progression and the key information for a certain topic is often scattered across multiple utterances of different speakers, which poses challenges to abstractly summarize dialogues. To capture the various topic information of a conversation and outline salient facts for the captured topics, this work proposes two topic-aware contrastive learning objectives, namely coherence detection and sub-summary generation objectives, which are expected to implicitly model the topic change and handle information scattering challenges for the dialogue summarization task. The proposed contrastive objectives are framed as auxiliary tasks for the primary dialogue summarization task, united via an alternative parameter updating strategy. Extensive experiments on benchmark datasets demonstrate that the proposed simple method significantly outperforms strong baselines and achieves new state-of-the-art performance. The code and trained models are publicly available at https://github.com/Junpliu/ConDigSum.
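
As a concrete illustration of the alternating parameter updating described above, here is a minimal sketch that cycles one objective per optimizer step between a primary summarization loss and two auxiliary contrastive losses. The loss functions are hypothetical placeholders standing in for the coherence-detection and sub-summary objectives, not the ConDigSum implementation.

```python
def alternating_train(loss_fns, optimizer, batches):
    """loss_fns: e.g. [summarization_loss, coherence_loss, sub_summary_loss],
    each a placeholder mapping a batch to a scalar torch loss."""
    for step, batch in enumerate(batches):
        loss = loss_fns[step % len(loss_fns)](batch)  # one objective per step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```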

Does Pretraining for Summarization Require Knowledge Transfer?

Comment: Camera-ready for Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04953

Abstract

Pretraining techniques leveraging enormous datasets have driven recent advances in text summarization. While folk explanations suggest that knowledge transfer accounts for pretraining's benefits, little is known about why it works or what makes a pretraining task or dataset suitable. In this paper, we challenge the knowledge transfer story, showing that pretraining on documents consisting of character n-grams selected at random, we can nearly match the performance of models pretrained on real corpora. This work holds the promise of eliminating upstream corpora, which may alleviate some concerns over offensive language, bias, and copyright issues. To see whether the small residual benefit of using real data could be accounted for by the structure of the pretraining task, we design several tasks motivated by a qualitative study of summarization corpora. However, these tasks confer no appreciable benefit, leaving open the possibility of a small role for knowledge transfer.

Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding

Comment: Accepted to Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04947

Abstract

Large-scale, pre-trained language models (LMs) have achieved human-level performance on a breadth of language understanding tasks. However, evaluations only based on end task performance shed little light on machines' true ability in language understanding and reasoning. In this paper, we highlight the importance of evaluating the underlying reasoning process in addition to end performance. Toward this goal, we introduce Tiered Reasoning for Intuitive Physics (TRIP), a novel commonsense reasoning dataset with dense annotations that enable multi-tiered evaluation of machines' reasoning process. Our empirical results show that while large LMs can achieve high end performance, they struggle to support their predictions with valid supporting evidence. The TRIP dataset and our baseline results will motivate verifiable evaluation of commonsense reasoning and facilitate future research toward developing better language understanding and reasoning models.

Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars

Comment: Accepted by EMNLP 2021

Link: http://arxiv.org/abs/2109.04939

Abstract

In computational linguistics, it has been shown that hierarchical structures make language models (LMs) more human-like. However, the previous literature has been agnostic about a parsing strategy of the hierarchical models. In this paper, we investigated whether hierarchical structures make LMs more human-like, and if so, which parsing strategy is most cognitively plausible. In order to address this question, we evaluated three LMs against human reading times in Japanese with head-final left-branching structures: Long Short-Term Memory (LSTM) as a sequential model and Recurrent Neural Network Grammars (RNNGs) with top-down and left-corner parsing strategies as hierarchical models. Our computational modeling demonstrated that left-corner RNNGs outperformed top-down RNNGs and LSTM, suggesting that hierarchical and left-corner architectures are more cognitively plausible than top-down or sequential architectures. In addition, the relationships between the cognitive plausibility and (i) perplexity, (ii) parsing, and (iii) beam size will also be discussed.

Beyond the Tip of the Iceberg: Assessing Coherence of Text Classifiers

Comment: Accepted to Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04922

Abstract

As large-scale, pre-trained language models achieve human-level and superhuman accuracy on existing language understanding tasks, statistical bias in benchmark data and probing studies have recently called into question their true capabilities. For a more informative evaluation than accuracy on text classification tasks can offer, we propose evaluating systems through a novel measure of prediction coherence. We apply our framework to two existing language understanding benchmarks with different properties to demonstrate its versatility. Our experimental results show that this evaluation framework, although simple in ideas and implementation, is a quick, effective, and versatile measure to provide insight into the coherence of machines' predictions.

Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes

Comment: EMNLP 2021 Main Conference

Link: http://arxiv.org/abs/2109.04921

Abstract

State-of-the-art contextual embeddings are obtained from large language models available only for a few languages. For others, we need to learn representations using a multilingual model. There is an ongoing debate on whether multilingual embeddings can be aligned in a space shared across many languages. The novel Orthogonal Structural Probe (Limisiewicz and Mareček, 2021) allows us to answer this question for specific linguistic features and learn a projection based only on mono-lingual annotated datasets. We evaluate syntactic (UD) and lexical (WordNet) structural information encoded in mBERT's contextual representations for nine diverse languages. We observe that for languages closely related to English, no transformation is needed. The evaluated information is encoded in a shared cross-lingual embedding space. For other languages, it is beneficial to apply orthogonal transformation learned separately for each language. We successfully apply our findings to zero-shot and few-shot cross-lingual parsing.

ReasonBERT: Pre-trained to Reason with Distant Supervision

Comment: Accepted to EMNLP'2021. Our code and pre-trained models are available at https://github.com/sunlab-osu/ReasonBERT

Link: http://arxiv.org/abs/2109.04912

Abstract

We present ReasonBert, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts. Unlike existing pre-training methods that only harvest learning signals from local contexts of naturally occurring texts, we propose a generalized notion of distant supervision to automatically connect multiple pieces of text and tables to create pre-training examples that require long-range reasoning. Different types of reasoning are simulated, including intersecting multiple pieces of evidence, bridging from one piece of evidence to another, and detecting unanswerable cases. We conduct a comprehensive evaluation on a variety of extractive question answering datasets ranging from single-hop to multi-hop and from text-only to table-only to hybrid that require various reasoning capabilities and show that ReasonBert achieves remarkable improvement over an array of strong baselines. Few-shot experiments further demonstrate that our pre-training method substantially improves sample efficiency.

Document-level Entity-based Extraction as Template Generation

Comment: 13 pages. EMNLP 2021

Link: http://arxiv.org/abs/2109.04901

Abstract

Document-level entity-based extraction (EE), aiming at extracting entity-centric information such as entity roles and entity relations, is key to automatic knowledge acquisition from text corpora for various domains. Most document-level EE systems build extractive models, which struggle to model long-term dependencies among entities at the document level. To address this issue, we propose a generative framework for two document-level EE tasks: role-filler entity extraction (REE) and relation extraction (RE). We first formulate them as a template generation problem, allowing models to efficiently capture cross-entity dependencies, exploit label semantics, and avoid the exponential computation complexity of identifying N-ary relations. A novel cross-attention guided copy mechanism, TopK Copy, is incorporated into a pre-trained sequence-to-sequence model to enhance the capabilities of identifying key information in the input document. Experiments done on the MUC-4 and SciREX dataset show new state-of-the-art results on REE (+3.26%), binary RE (+4.8%), and 4-ary RE (+2.7%) in F1 score.

Efficient Test Time Adapter Ensembling for Low-resource Language Varieties

Comment: EMNLP 2021 Findings

Link: http://arxiv.org/abs/2109.04877

Abstract

Adapters are light-weight modules that allow parameter-efficient fine-tuning of pretrained models. Specialized language and task adapters have recently been proposed to facilitate cross-lingual transfer of multilingual pretrained models (Pfeiffer et al., 2020b). However, this approach requires training a separate language adapter for every language one wishes to support, which can be impractical for languages with limited data. An intuitive solution is to use a related language adapter for the new language variety, but we observe that this solution can lead to sub-optimal performance. In this paper, we aim to improve the robustness of language adapters to uncovered languages without training new adapters. We find that ensembling multiple existing language adapters makes the fine-tuned model significantly more robust to other language varieties not included in these adapters. Building upon this observation, we propose Entropy Minimized Ensemble of Adapters (EMEA), a method that optimizes the ensemble weights of the pretrained language adapters for each test sentence by minimizing the entropy of its predictions. Experiments on three diverse groups of language varieties show that our method leads to significant improvements on both named entity recognition and part-of-speech tagging across all languages.
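
Below is a minimal sketch of the test-time entropy-minimization idea, under the simplifying assumption that each language adapter's logits are computed once and then mixed; the paper instead re-runs the model with the weighted adapters, so this is illustrative rather than the EMEA implementation.

```python
import torch
import torch.nn.functional as F

def entropy_min_weights(adapter_logits, steps=5, lr=0.1):
    # adapter_logits: [num_adapters, batch, num_labels], one slice per adapter
    alpha = torch.zeros(adapter_logits.shape[0], requires_grad=True)
    opt = torch.optim.SGD([alpha], lr=lr)
    for _ in range(steps):
        w = F.softmax(alpha, dim=0)                       # ensemble weights
        mixed = (w[:, None, None] * adapter_logits).sum(0)
        probs = F.softmax(mixed, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean()
        opt.zero_grad()
        entropy.backward()                                 # minimize prediction entropy
        opt.step()
    return F.softmax(alpha, dim=0).detach()

print(entropy_min_weights(torch.randn(3, 8, 17)))  # toy shapes
```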

Studying word order through iterative shuffling

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04867

Abstract

As neural language models approach human performance on NLP benchmark tasks, their advances are widely seen as evidence of an increasingly complex understanding of syntax. This view rests upon a hypothesis that has not yet been empirically tested: that word order encodes meaning essential to performing these tasks. We refute this hypothesis in many cases: in the GLUE suite and in various genres of English text, the words in a sentence or phrase can rarely be permuted to form a phrase carrying substantially different information. Our surprising result relies on inference by iterative shuffling (IBIS), a novel, efficient procedure that finds the ordering of a bag of words having the highest likelihood under a fixed language model. IBIS can use any black-box model without additional training and is superior to existing word ordering algorithms. Coalescing our findings, we discuss how shuffling inference procedures such as IBIS can benefit language modeling and constrained generation.
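
The shuffling idea can be illustrated with a toy hill-climbing loop: propose local swaps of a bag of words and keep them when a black-box scorer assigns a higher score. The scorer here is a trivial placeholder; IBIS itself is more sophisticated and uses a real language-model likelihood.

```python
import random

def score(words):
    # placeholder scorer: rewards alphabetical order (swap in an LM log-likelihood)
    return -sum(1 for a, b in zip(words, words[1:]) if a > b)

def shuffle_search(bag_of_words, rounds=200, seed=0):
    rng = random.Random(seed)
    order = list(bag_of_words)
    best = score(order)
    for _ in range(rounds):
        i, j = rng.sample(range(len(order)), 2)      # propose a swap
        order[i], order[j] = order[j], order[i]
        s = score(order)
        if s >= best:
            best = s                                 # keep the swap
        else:
            order[i], order[j] = order[j], order[i]  # revert
    return order, best

print(shuffle_search("the cat sat on the mat".split()))
```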

CoPHE: A Count-Preserving Hierarchical Evaluation Metric in Large-Scale Multi-Label Text Classification

Comment: 5 pages, 2 figures, EMNLP 2021

Link: http://arxiv.org/abs/2109.04853

Abstract

Large-Scale Multi-Label Text Classification (LMTC) includes tasks with hierarchical label spaces, such as automatic assignment of ICD-9 codes to discharge summaries. Performance of models in prior art is evaluated with standard precision, recall, and F1 measures without regard for the rich hierarchical structure. In this work we argue for hierarchical evaluation of the predictions of neural LMTC models. With the example of the ICD-9 ontology we describe a structural issue in the representation of the structured label space in prior art, and propose an alternative representation based on the depth of the ontology. We propose a set of metrics for hierarchical evaluation using the depth-based representation. We compare the evaluation scores from the proposed metrics with previously used metrics on prior art LMTC models for ICD-9 coding in MIMIC-III. We also propose further avenues of research involving the proposed ontological representation.

Block Pruning For Faster Transformers

Comment: EMNLP 2021. Code, hyper-parameters, evaluation results and checkpoints available at https://github.com/huggingface/nn_pruning

Link: http://arxiv.org/abs/2109.04838

Abstract

Pre-training has improved model accuracy for both classification and generation tasks at the cost of introducing much larger and slower models. Pruning methods have proven to be an effective way of reducing model size, whereas distillation methods are proven for speeding up inference. We introduce a block pruning approach targeting both small and fast models. Our approach extends structured methods by considering blocks of any size and integrates this structure into the movement pruning paradigm for fine-tuning. We find that this approach learns to prune out full components of the underlying model, such as attention heads. Experiments consider classification and generation tasks, yielding among other results a pruned model that is a 2.4x faster, 74% smaller BERT on SQuAD v1, with a 1% drop on F1, competitive both with distilled models in speed and pruned models in size.
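
To illustrate what pruning whole blocks of a weight matrix looks like, here is a minimal sketch that tiles a matrix into fixed-size blocks, scores each block, and zeroes the lowest-scoring fraction. It uses simple magnitude scores as a stand-in; the paper learns movement-pruning scores jointly with fine-tuning.

```python
import torch

def block_prune(weight, block=(32, 32), sparsity=0.5):
    rows, cols = weight.shape
    br, bc = block
    blocks = weight.reshape(rows // br, br, cols // bc, bc)  # tile into blocks
    scores = blocks.abs().mean(dim=(1, 3))                   # one score per block
    k = int(scores.numel() * sparsity)
    threshold = scores.flatten().kthvalue(k).values
    mask = (scores > threshold).float()[:, None, :, None]    # keep high-scoring blocks
    return (blocks * mask).reshape(rows, cols)

pruned = block_prune(torch.randn(768, 3072), block=(64, 768), sparsity=0.5)
```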

An Evaluation Dataset and Strategy for Building Robust Multi-turn Response Selection Model

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04834

Abstract

Multi-turn response selection models have recently shown comparable performance to humans in several benchmark datasets. However, in the real environment, these models often have weaknesses, such as making incorrect predictions based heavily on superficial patterns without a comprehensive understanding of the context. For example, these models often give a high score to the wrong response candidate containing several keywords related to the context but using the inconsistent tense. In this study, we analyze the weaknesses of the open-domain Korean Multi-turn response selection models and publish an adversarial dataset to evaluate these weaknesses. We also suggest a strategy to build a robust model in this adversarial environment.

Asking It All: Generating Contextualized Questions for any Semantic Role

Comment: Accepted as a long paper to EMNLP 2021, Main Conference

Link: http://arxiv.org/abs/2109.04832

Abstract

Asking questions about a situation is an inherent step towards understanding it. To this end, we introduce the task of role question generation, which, given a predicate mention and a passage, requires producing a set of questions asking about all possible semantic roles of the predicate. We develop a two-stage model for this task, which first produces a context-independent question prototype for each role and then revises it to be contextually appropriate for the passage. Unlike most existing approaches to question generation, our approach does not require conditioning on existing answers in the text. Instead, we condition on the type of information to inquire about, regardless of whether the answer appears explicitly in the text, could be inferred from it, or should be sought elsewhere. Our evaluation demonstrates that we generate diverse and well-formed questions for a large, broad-coverage ontology of predicates and roles.

Artificial Text Detection via Examining the Topology of Attention Maps

Comment: Accepted to EMNLP 2021

Link: http://arxiv.org/abs/2109.04825

Abstract

The impressive capabilities of recent generative models to create texts that are challenging to distinguish from the human-written ones can be misused for generating fake news, product reviews, and even abusive content. Despite the prominent performance of existing methods for artificial text detection, they still lack interpretability and robustness towards unseen models. To this end, we propose three novel types of interpretable topological features for this task based on Topological Data Analysis (TDA) which is currently understudied in the field of NLP. We empirically show that the features derived from the BERT model outperform count- and neural-based baselines up to 10% on three common datasets, and tend to be the most robust towards unseen GPT-style generation models as opposed to existing methods. The probing analysis of the features reveals their sensitivity to the surface and syntactic properties. The results demonstrate that TDA is a promising line with respect to NLP tasks, specifically the ones that incorporate surface and structural information.

Does It Capture STEL? A Modular, Similarity-based Linguistic Style Evaluation Framework

Comment: Accepted at EMNLP2021

Link: http://arxiv.org/abs/2109.04817

Abstract

Style is an integral part of natural language. However, evaluation methods for style measures are rare, often task-specific and usually do not control for content. We propose the modular, fine-grained and content-controlled similarity-based STyle EvaLuation framework (STEL) to test the performance of any model that can compare two sentences on style. We illustrate STEL with two general dimensions of style (formal/informal and simple/complex) as well as two specific characteristics of style (contrac'tion and numb3r substitution). We find that BERT-based methods outperform simple versions of commonly used style measures like 3-grams, punctuation frequency and LIWC-based approaches. We invite the addition of further tasks and task instances to STEL and hope to facilitate the improvement of style-sensitive measures.

Mixture-of-Partitions: Infusing Large Biomedical Knowledge Graphs into BERT

Comment: EMNLP 2021 camera-ready version

Link: http://arxiv.org/abs/2109.04810

Abstract

Infusing factual knowledge into pre-trained models is fundamental for many knowledge-intensive tasks. In this paper, we proposed Mixture-of-Partitions (MoP), an infusion approach that can handle a very large knowledge graph (KG) by partitioning it into smaller sub-graphs and infusing their specific knowledge into various BERT models using lightweight adapters. To leverage the overall factual knowledge for a target task, these sub-graph adapters are further fine-tuned along with the underlying BERT through a mixture layer. We evaluate our MoP with three biomedical BERTs (SciBERT, BioBERT, PubmedBERT) on six downstream tasks (inc. NLI, QA, Classification), and the results show that our MoP consistently enhances the underlying BERTs in task performance, and achieves new SOTA performances on five evaluated datasets.

Exophoric Pronoun Resolution in Dialogues with Topic Regularization

Comment: EMNLP 2021 main conference

Link: http://arxiv.org/abs/2109.04787

Abstract

Resolving pronouns to their referents has long been studied as a fundamental natural language understanding problem. Previous works on pronoun coreference resolution (PCR) mostly focus on resolving pronouns to mentions in text while ignoring the exophoric scenario. Exophoric pronouns are common in daily communications, where speakers may directly use pronouns to refer to some objects present in the environment without introducing the objects first. Although such objects are not mentioned in the dialogue text, they can often be disambiguated by the general topics of the dialogue. Motivated by this, we propose to jointly leverage the local context and global topics of dialogues to solve the out-of-text PCR problem. Extensive experiments demonstrate the effectiveness of adding topic regularization for resolving exophoric pronouns.

RoR: Read-over-Read for Long Document Machine Reading Comprehension

Comment: Accepted as findings of EMNLP2021

Link: http://arxiv.org/abs/2109.04780

Abstract

Transformer-based pre-trained models, such as BERT, have achieved remarkable results on machine reading comprehension. However, due to the constraint of encoding length (e.g., 512 WordPiece tokens), a long document is usually split into multiple chunks that are independently read. It results in the reading field being limited to individual chunks without information collaboration for long document machine reading comprehension. To address this problem, we propose RoR, a read-over-read method, which expands the reading field from chunk to document. Specifically, RoR includes a chunk reader and a document reader. The former first predicts a set of regional answers for each chunk, which are then compacted into a highly-condensed version of the original document, guaranteeing to be encoded once. The latter further predicts the global answers from this condensed document. Eventually, a voting strategy is utilized to aggregate and rerank the regional and global answers for final prediction. Extensive experiments on two benchmarks QuAC and TriviaQA demonstrate the effectiveness of RoR for long document reading. Notably, RoR ranks 1st place on the QuAC leaderboard (https://quac.ai/) at the time of submission (May 17th, 2021).

Improving Multilingual Translation by Representation and Gradient Regularization

Comment: EMNLP 2021 (Long)

Link: http://arxiv.org/abs/2109.04778

Abstract

Multilingual Neural Machine Translation (NMT) enables one model to serve all translation directions, including ones that are unseen during training, i.e. zero-shot translation. Despite being theoretically attractive, current models often produce low quality translations -- commonly failing to even produce outputs in the right target language. In this work, we observe that off-target translation is dominant even in strong multilingual systems, trained on massive multilingual corpora. To address this issue, we propose a joint approach to regularize NMT models at both representation-level and gradient-level. At the representation level, we leverage an auxiliary target language prediction task to regularize decoder outputs to retain information about the target language. At the gradient level, we leverage a small amount of direct data (in thousands of sentence pairs) to regularize model gradients. Our results demonstrate that our approach is highly effective in both reducing off-target translation occurrences and improving zero-shot translation performance by +5.59 and +10.38 BLEU on WMT and OPUS datasets respectively. Moreover, experiments show that our method also works well when the small amount of direct data is not available.

A Strong Baseline for Query Efficient Attacks in a Black Box Setting

Comment: EMNLP 2021 - Main Conference

Link: http://arxiv.org/abs/2109.04775

Abstract

Existing black box search methods have achieved high success rate in generating adversarial attacks against NLP models. However, such search methods are inefficient as they do not consider the amount of queries required to generate adversarial attacks. Also, prior attacks do not maintain a consistent search space while comparing different search methods. In this paper, we propose a query efficient attack strategy to generate plausible adversarial examples on text classification and entailment tasks. Our attack jointly leverages attention mechanism and locality sensitive hashing (LSH) to reduce the query count. We demonstrate the efficacy of our approach by comparing our attack with four baselines across three different search spaces. Further, we benchmark our results across the same search space used in prior attacks. In comparison to attacks proposed, on an average, we are able to reduce the query count by 75% across all datasets and target models. We also demonstrate that our attack achieves a higher success rate when compared to prior attacks in a limited query setting.

How Does Fine-tuning Affect the Geometry of Embedding Space: A Case Study on Isotropy

Comment: To appear in Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04740

Abstract

It is widely accepted that fine-tuning pre-trained language models usually brings about performance improvements in downstream tasks. However, there are limited studies on the reasons behind this effectiveness, particularly from the viewpoint of structural changes in the embedding space. Trying to fill this gap, in this paper, we analyze the extent to which the isotropy of the embedding space changes after fine-tuning. We demonstrate that, even though isotropy is a desirable geometrical property, fine-tuning does not necessarily result in isotropy enhancements. Moreover, local structures in pre-trained contextual word representations (CWRs), such as those encoding token types or frequency, undergo a massive change during fine-tuning. Our experiments show dramatic growth in the number of elongated directions in the embedding space, which, in contrast to pre-trained CWRs, carry the essential linguistic knowledge in the fine-tuned embedding space, making existing isotropy enhancement methods ineffective.
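
For readers who want to reproduce this kind of geometry check, below is a sketch of a standard isotropy measure from this literature (Mu and Viswanath, 2018): the ratio of the smallest to the largest partition-function value over eigenvector directions. Whether this exact measure is the one used in the paper is an assumption.

```python
import numpy as np

def isotropy(embeddings):
    # embeddings: (num_vectors, dim)
    _, eigvecs = np.linalg.eigh(embeddings.T @ embeddings)  # columns are directions c
    z = np.exp(embeddings @ eigvecs).sum(axis=0)            # Z(c) for each direction
    return z.min() / z.max()                                 # 1.0 = perfectly isotropic

print(isotropy(np.random.randn(1000, 64)))
```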

Genre as Weak Supervision for Cross-lingual Dependency Parsing

Comment: Accepted to EMNLP 2021 (Main Conference)

Link: http://arxiv.org/abs/2109.04733

Abstract

Recent work has shown that monolingual masked language models learn to represent data-driven notions of language variation which can be used for domain-targeted training data selection. Dataset genre labels are already frequently available, yet remain largely unexplored in cross-lingual setups. We harness this genre metadata as a weak supervision signal for targeted data selection in zero-shot dependency parsing. Specifically, we project treebank-level genre information to the finer-grained sentence level, with the goal to amplify information implicitly stored in unsupervised contextualized representations. We demonstrate that genre is recoverable from multilingual contextual embeddings and that it provides an effective signal for training data selection in cross-lingual, zero-shot scenarios. For 12 low-resource language treebanks, six of which are test-only, our genre-specific methods significantly outperform competitive baselines as well as recent embedding-based methods for data selection. Moreover, genre-based data selection provides new state-of-the-art results for three of these target languages.

Assessing the Reliability of Word Embedding Gender Bias Measures

Comment: 23 pages, 24 figures, 3 tables. Accepted to EMNLP 2021

Link: http://arxiv.org/abs/2109.04732

Abstract

Various measures have been proposed to quantify human-like social biases in word embeddings. However, bias scores based on these measures can suffer from measurement error. One indication of measurement quality is reliability, concerning the extent to which a measure produces consistent results. In this paper, we assess three types of reliability of word embedding gender bias measures, namely test-retest reliability, inter-rater consistency and internal consistency. Specifically, we investigate the consistency of bias scores across different choices of random seeds, scoring rules and words. Furthermore, we analyse the effects of various factors on these measures' reliability scores. Our findings inform better design of word embedding gender bias measures. Moreover, we urge researchers to be more critical about the application of such measures.

AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04715

Abstract

Reproducible benchmarks are crucial in driving progress of machine translation research. However, existing machine translation benchmarks have been mostly limited to high-resource or well-represented languages. Despite an increasing interest in low-resource machine translation, there are no standardized reproducible benchmarks for many African languages, many of which are used by millions of speakers but have less digitized textual data. To tackle these challenges, we propose AfroMT, a standardized, clean, and reproducible machine translation benchmark for eight widely spoken African languages. We also develop a suite of analysis tools for system diagnosis taking into account the unique properties of these languages. Furthermore, we explore the newly considered case of low-resource focused pretraining and develop two novel data augmentation-based strategies, leveraging word-level alignment information and pseudo-monolingual data for pretraining multilingual sequence-to-sequence models. We demonstrate significant improvements when pretraining on 11 languages, with gains of up to 2 BLEU points over strong baselines. We also show gains of up to 12 BLEU points over cross-lingual transfer baselines in data-constrained scenarios. All code and pretrained models will be released as further steps towards larger reproducible benchmarks for African languages.

Balancing Methods for Multi-label Text Classification with Long-Tailed Class Distribution

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04712

Abstract

Multi-label text classification is a challenging task because it requires capturing label dependencies. It becomes even more challenging when class distribution is long-tailed. Resampling and re-weighting are common approaches used for addressing the class imbalance problem, however, they are not effective when there is label dependency besides class imbalance because they result in oversampling of common labels. Here, we introduce the application of balancing loss functions for multi-label text classification. We perform experiments on a general domain dataset with 90 labels (Reuters-21578) and a domain-specific dataset from PubMed with 18211 labels. We find that a distribution-balanced loss function, which inherently addresses both the class imbalance and label linkage problems, outperforms commonly used loss functions. Distribution balancing methods have been successfully used in the image recognition field. Here, we show their effectiveness in natural language processing. Source code is available at https://github.com/blessu/BalancedLossNLP.
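
As a simplified illustration of a balancing loss for long-tailed multi-label classification, the sketch below weights a multi-label binary cross-entropy by class-balanced weights (the "effective number of samples" scheme of Cui et al., 2019). The full distribution-balanced loss in the paper also rebalances per-instance weights and adds a negative-tolerant term; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def weighted_multilabel_bce(logits, targets, label_freq, beta=0.999):
    # class-balanced weights: (1 - beta) / (1 - beta ** n_c)
    weights = (1.0 - beta) / (1.0 - beta ** label_freq.clamp(min=1))
    weights = weights / weights.mean()
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * weights).mean()

logits = torch.randn(4, 90)                     # batch of 4, 90 labels
targets = (torch.rand(4, 90) < 0.05).float()
label_freq = torch.randint(1, 1000, (90,)).float()
print(weighted_multilabel_bce(logits, targets, label_freq))
```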

Pre-train or Annotate? Domain Adaptation with a Constrained Budget

Comment: Accepted to EMNLP 2021

Link: http://arxiv.org/abs/2109.04711

Abstract

Recent work has demonstrated that pre-training in-domain language models can boost performance when adapting to a new domain. However, the costs associated with pre-training raise an important question: given a fixed budget, what steps should an NLP practitioner take to maximize performance? In this paper, we study domain adaptation under budget constraints, and approach it as a customer choice problem between data annotation and pre-training. Specifically, we measure the annotation cost of three procedural text datasets and the pre-training cost of three in-domain language models. Then we evaluate the utility of different combinations of pre-training and data annotation under varying budget constraints to assess which combination strategy works best. We find that, for small budgets, spending all funds on annotation leads to the best performance; once the budget becomes large enough, a combination of data annotation and in-domain pre-training works more optimally. We therefore suggest that task-specific data annotation should be part of an economical strategy when adapting an NLP model to a new domain.

Knowledge-Aware Meta-learning for Low-Resource Text Classification

Comment: Accepted by EMNLP 2021

Link: http://arxiv.org/abs/2109.04707

Abstract

Meta-learning has achieved great success in leveraging the historical learned knowledge to facilitate the learning process of the new task. However, merely learning the knowledge from the historical tasks, adopted by current meta-learning algorithms, may not generalize well to testing tasks when they are not well-supported by training tasks. This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks by leveraging the external knowledge bases. Specifically, we propose KGML to introduce additional representation for each sentence learned from the extracted sentence-specific knowledge graph. The extensive experiments on three datasets demonstrate the effectiveness of KGML under both supervised adaptation and unsupervised adaptation settings.

Rethinking Zero-shot Neural Machine Translation: From a Perspective of Latent Variables

Comment: EMNLP Findings 2021

Link: http://arxiv.org/abs/2109.04705

Abstract

Zero-shot translation, directly translating between language pairs unseen in training, is a promising capability of multilingual neural machine translation (NMT). However, it usually suffers from capturing spurious correlations between the output language and language invariant semantics due to the maximum likelihood training objective, leading to poor transfer performance on zero-shot translation. In this paper, we introduce a denoising autoencoder objective based on pivot language into traditional training objective to improve the translation accuracy on zero-shot directions. The theoretical analysis from the perspective of latent variables shows that our approach actually implicitly maximizes the probability distributions for zero-shot directions. On two benchmark machine translation datasets, we demonstrate that the proposed method is able to effectively eliminate the spurious correlations and significantly outperforms state-of-the-art methods with a remarkable performance. Our code is available at https://github.com/Victorwz/zs-nmt-dae.

Heterogeneous Graph Neural Networks for Keyphrase Generation

Comment: Accepted by EMNLP 2021

Link: http://arxiv.org/abs/2109.04703

Abstract

The encoder-decoder framework achieves state-of-the-art results in keyphrase generation (KG) tasks by predicting both present keyphrases that appear in the source document and absent keyphrases that do not. However, relying solely on the source document can result in generating uncontrollable and inaccurate absent keyphrases. To address these problems, we propose a novel graph-based method that can capture explicit knowledge from related references. Our model first retrieves some document-keyphrases pairs similar to the source document from a pre-defined index as references. Then a heterogeneous graph is constructed to capture relationships of different granularities between the source document and its references. To guide the decoding process, a hierarchical attention and copy mechanism is introduced, which directly copies appropriate words from both the source document and its references based on their relevance and significance. The experimental results on multiple KG benchmarks show that the proposed model achieves significant improvements against other baseline models, especially with regard to the absent keyphrase prediction.

Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning

Comment: To appear in Proceedings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04689

Abstract

Motivated by suggested question generation in conversational news recommendation systems, we propose a model for generating question-answer pairs (QA pairs) with self-contained, summary-centric questions and length-constrained, article-summarizing answers. We begin by collecting a new dataset of news articles with questions as titles and pairing them with summaries of varying length. This dataset is used to learn a QA pair generation model producing summaries as answers that balance brevity with sufficiency jointly with their corresponding questions. We then reinforce the QA pair generation process with a differentiable reward function to mitigate exposure bias, a common problem in natural language generation. Both automatic metrics and human evaluation demonstrate these QA pairs successfully capture the central gists of the articles and achieve high answer accuracy.

DIALKI: Knowledge Identification in Conversational Systems through Dialogue-Document Contextualization

Comment: EMNLP 2021 camera-ready

Link: http://arxiv.org/abs/2109.04673

Abstract

Identifying relevant knowledge to be used in conversational systems that are grounded in long documents is critical to effective response generation. We introduce a knowledge identification model that leverages the document structure to provide dialogue-contextualized passage encodings and better locate knowledge relevant to the conversation. An auxiliary loss captures the history of dialogue-document connections. We demonstrate the effectiveness of our model on two document-grounded conversational datasets and provide analyses showing generalization to unseen documents and long dialogue contexts.

Investigating Numeracy Learning Ability of a Text-to-Text Transfer Model

Comment: 7 pages, 10 figures, 5 tables, Accepted in the Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04672

Abstract

The transformer-based pre-trained language models have been tremendously successful in most of the conventional NLP tasks. But they often struggle in those tasks where numerical understanding is required. Some possible reasons can be the tokenizers and pre-training objectives which are not specifically designed to learn and preserve numeracy. Here we investigate the ability of text-to-text transfer learning model (T5), which has outperformed its predecessors in the conventional NLP tasks, to learn numeracy. We consider four numeracy tasks: numeration, magnitude order prediction, finding minimum and maximum in a series, and sorting. We find that, although T5 models perform reasonably well in the interpolation setting, they struggle considerably in the extrapolation setting across all four tasks.
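
The four numeracy probes can be cast as simple text-to-text examples, as in the sketch below. The prompt templates and the digit-count proxy for magnitude order are assumptions for illustration; the paper defines its own formats.

```python
import random

def make_examples(rng, low=0, high=10_000, k=5):
    nums = [rng.randint(low, high) for _ in range(k)]
    series = ", ".join(str(n) for n in nums)
    return [
        # numeration: digits -> words (a library such as num2words could fill this in)
        (f"numeration: {nums[0]}", "<number in words>"),
        (f"magnitude order: {nums[0]}", str(len(str(nums[0])))),  # digit count as a proxy
        (f"find minimum: {series}", str(min(nums))),
        (f"find maximum: {series}", str(max(nums))),
        (f"sort ascending: {series}", ", ".join(str(n) for n in sorted(nums))),
    ]

for source, target in make_examples(random.Random(0)):
    print(source, "->", target)
```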

Zero-Shot Dialogue State Tracking via Cross-Task Transfer

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04655

Abstract

Zero-shot transfer learning for dialogue state tracking (DST) enables us to handle a variety of task-oriented dialogue domains without the expense of collecting in-domain data. In this work, we propose to transfer the cross-task knowledge from general question answering (QA) corpora for the zero-shot DST task. Specifically, we propose TransferQA, a transferable generative QA model that seamlessly combines extractive QA and multi-choice QA via a text-to-text transformer framework, and tracks both categorical slots and non-categorical slots in DST. In addition, we introduce two effective ways to construct unanswerable questions, namely, negative question sampling and context truncation, which enable our model to handle "none" value slots in the zero-shot DST setting. The extensive experiments show that our approaches substantially improve the existing zero-shot and few-shot results on MultiWoz. Moreover, compared to the fully trained baseline on the Schema-Guided Dialogue dataset, our approach shows better generalization ability in unseen domains.

Towards Developing a Multilingual and Code-Mixed Visual Question Answering System by Knowledge Distillation

Comment: Accepted in EMNLP-Findings (2021)

Link: http://arxiv.org/abs/2109.04653

Abstract

Pre-trained language-vision models have shown remarkable performance on the visual question answering (VQA) task. However, most pre-trained models are trained by only considering monolingual learning, especially the resource-rich language like English. Training such models for multilingual setups demand high computing resources and multilingual language-vision dataset which hinders their application in practice. To alleviate these challenges, we propose a knowledge distillation approach to extend an English language-vision model (teacher) into an equally effective multilingual and code-mixed model (student). Unlike the existing knowledge distillation methods, which only use the output from the last layer of the teacher network for distillation, our student model learns and imitates the teacher from multiple intermediate layers (language and vision encoders) with appropriately designed distillation objectives for incremental knowledge extraction. We also create the large-scale multilingual and code-mixed VQA dataset in eleven different language setups considering the multiple Indian and European languages. Experimental results and in-depth analysis show the effectiveness of the proposed VQA model over the pre-trained language-vision models on eleven diverse language setups.

What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers

Comment: Accepted to EMNLP2021 as a long paper

Link: http://arxiv.org/abs/2109.04650

Abstract

GPT-3 shows remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds of billion scale data. Here we address some remaining issues less reported by the GPT-3 paper, such as a non-English LM, the performances of different sized models, and the effect of recently introduced prompt optimization on in-context learning. To achieve this, we introduce HyperCLOVA, a Korean variant of 82B GPT-3 trained on a Korean-centric corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA with our training configuration shows state-of-the-art in-context zero-shot and few-shot learning performances on various downstream tasks in Korean. Also, we show the performance benefits of prompt-based learning and demonstrate how it can be integrated into the prompt engineering pipeline. Then we discuss the possibility of materializing the No Code AI paradigm by providing AI prototyping capabilities to non-experts of ML by introducing HyperCLOVA studio, an interactive prompt engineering interface. Lastly, we demonstrate the potential of our methods with three successful in-house applications.

Rule-based Morphological Inflection Improves Neural Terminology Translation

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04620

Abstract

Current approaches to incorporating terminology constraints in machine translation (MT) typically assume that the constraint terms are provided in their correct morphological forms. This limits their application to real-world scenarios where constraint terms are provided as lemmas. In this paper, we introduce a modular framework for incorporating lemma constraints in neural MT (NMT) in which linguistic knowledge and diverse types of NMT models can be flexibly applied. It is based on a novel cross-lingual inflection module that inflects the target lemma constraints based on the source context. We explore linguistically motivated rule-based and data-driven neural-based inflection modules and design English-German health and English-Lithuanian news test suites to evaluate them in domain adaptation and low-resource MT settings. Results show that our rule-based inflection module helps NMT models incorporate lemma constraints more accurately than a neural module and outperforms the existing end-to-end approach with lower training costs.

An Exploratory Study on Long Dialogue Summarization: What Works and What's Next

Comment: Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04609

Abstract

Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series. However, real-world dialogues pose a great challenge to current summarization models, as the dialogue length typically exceeds the input limits imposed by recent transformer-based pre-trained models, and the interactive nature of dialogues makes relevant information more context-dependent and sparsely distributed than news articles. In this work, we perform a comprehensive study on long dialogue summarization by investigating three strategies to deal with the lengthy input problem and locate relevant information: (1) extended transformer models such as Longformer, (2) retrieve-then-summarize pipeline models with several dialogue utterance retrieval methods, and (3) hierarchical dialogue encoding models such as HMNet. Our experimental results on three long dialogue datasets (QMSum, MediaSum, SummScreen) show that the retrieve-then-summarize pipeline models yield the best performance. We also demonstrate that the summary quality can be further improved with a stronger retrieval model and pretraining on proper external summarization datasets.

IndoBERTweet: A Pretrained Language Model for Indonesian Twitter with Effective Domain-Specific Vocabulary Initialization

Comment: Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2109.04607

Abstract

We present IndoBERTweet, the first large-scale pretrained model for Indonesian Twitter that is trained by extending a monolingually-trained Indonesian BERT model with additive domain-specific vocabulary. We focus in particular on efficient model adaptation under vocabulary mismatch, and benchmark different ways of initializing the BERT embedding layer for new word types. We find that initializing with the average BERT subword embedding makes pretraining five times faster, and is more effective than proposed methods for vocabulary adaptation in terms of extrinsic evaluation over seven Twitter-based datasets.
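
The "average BERT subword embedding" initialization lends itself to a short sketch with the Hugging Face transformers API: compute each new word's vector from the subword pieces the existing tokenizer assigns it, then add the word to the vocabulary and write that averaged vector into the resized embedding matrix. The model name and new tokens below are placeholders, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "indobenchmark/indobert-base-p1"   # placeholder Indonesian BERT
new_tokens = ["wkwk", "gapapa"]                 # hypothetical domain-specific words

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Average the existing subword embeddings for each new word.
old_embeddings = model.get_input_embeddings().weight.detach().clone()
init_vectors = []
for tok in new_tokens:
    piece_ids = tokenizer(tok, add_special_tokens=False)["input_ids"]
    init_vectors.append(old_embeddings[piece_ids].mean(dim=0))

# Add the words, resize the embedding matrix, and write the averaged vectors.
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
with torch.no_grad():
    model.get_input_embeddings().weight[-len(new_tokens):] = torch.stack(init_vectors)
```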

Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations

Comment: Accepted paper EMNLP2021

Link: http://arxiv.org/abs/2109.04602

Abstract

Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level. However, there has been limited progress in generating useful discourse-level representations. In this work, we propose to use ideas from predictive coding theory to augment BERT-style language models with a mechanism that allows them to learn suitable discourse-level representations. As a result, our proposed approach is able to predict future sentences using explicit top-down connections that operate at the intermediate layers of the network. By experimenting with benchmarks designed to evaluate discourse-related knowledge using pre-trained sentence representations, we demonstrate that our approach improves performance in 6 out of 11 tasks by excelling in discourse relationship detection.

Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph

Comment: Published in Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04400

Abstract

In cross-lingual text classification, it is required that task-specific training data in high-resource source languages are available, where the task is identical to that of a low-resource target language. However, collecting such training data can be infeasible because of the labeling cost, task characteristics, and privacy concerns. This paper proposes an alternative solution that uses only task-independent word embeddings of high-resource languages and bilingual dictionaries. First, we construct a dictionary-based heterogeneous graph (DHG) from bilingual dictionaries. This opens the possibility to use graph neural networks for cross-lingual transfer. The remaining challenge is the heterogeneity of DHG because multiple languages are considered. To address this challenge, we propose dictionary-based heterogeneous graph neural network (DHGNet) that effectively handles the heterogeneity of DHG by two-step aggregations, which are word-level and language-level aggregations. Experimental results demonstrate that our method outperforms pretrained models even though it does not access to large corpora. Furthermore, it can perform well even though dictionaries contain many incorrect translations. Its robustness allows the usage of a wider range of dictionaries such as an automatically constructed dictionary and crowdsourced dictionary, which are convenient for real-world applications.

Counterfactual Adversarial Learning with Representation Interpolation

Comment: Accepted to Findings of EMNLP 2021

Link: http://arxiv.org/abs/2109.04746

Abstract

Deep learning models exhibit a preference for statistical fitting over logical reasoning. Spurious correlations might be memorized when there exists statistical bias in training data, which severely limits the model performance especially in small data scenarios. In this work, we introduce Counterfactual Adversarial Training framework (CAT) to tackle the problem from a causality perspective. Particularly, for a specific sample, CAT first generates a counterfactual representation through latent space interpolation in an adversarial manner, and then performs Counterfactual Risk Minimization (CRM) on each original-counterfactual pair to adjust sample-wise loss weight dynamically, which encourages the model to explore the true causal effect. Extensive experiments demonstrate that CAT achieves substantial performance improvement over SOTA across different downstream tasks, including sentence classification, natural language inference and question answering.

Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness

Comment: EMNLP 2021

Link: http://arxiv.org/abs/2109.04624

Abstract

Text style can reveal sensitive attributes of the author (e.g. race or age) to the reader, which can, in turn, lead to privacy violations and bias in both human and algorithmic decisions based on text. For example, the style of writing in job applications might reveal protected attributes of the candidate which could lead to bias in hiring decisions, regardless of whether hiring decisions are made algorithmically or by humans. We propose a VAE-based framework that obfuscates stylistic features of human-generated text through style transfer by automatically re-writing the text itself. Our framework operationalizes the notion of obfuscated style in a flexible way that enables two distinct notions of obfuscated style: (1) a minimal notion that effectively intersects the various styles seen in training, and (2) a maximal notion that seeks to obfuscate by adding stylistic features of all sensitive attributes to text, in effect, computing a union of styles. Our style-obfuscation framework can be used for multiple purposes, however, we demonstrate its effectiveness in improving the fairness of downstream classifiers. We also conduct a comprehensive study on style pooling's effect on fluency, semantic consistency, and attribute removal from text, in two and three domain style obfuscation.

| 4p变态网欧美系列 | 亚洲国产激情 | 免费欧美精品 | av在线成人 | 99国产精品 | 久久五月婷婷丁香 | 顶级bbw搡bbbb搡bbbb | 蜜臀久久99精品久久久酒店新书 | 91色蜜桃 | 亚洲成人精品 | 日韩影视精品 | 免费黄色在线播放 | 夜夜夜夜猛噜噜噜噜噜初音未来 | 久久国产免费视频 | 天天做日日做天天爽视频免费 | 成年人看片网站 | 91亚洲精品视频 | 国产一区二区三区视频在线 | 国产最新视频在线观看 | 亚洲一二区精品 | 日韩三区在线 | 亚洲伊人色 | 亚洲春色综合另类校园电影 | 日韩激情中文字幕 | 久久人91精品久久久久久不卡 | 国产视频亚洲视频 | 久久亚洲人| 久久婷婷综合激情 | 免费高清看电视网站 | 国产麻豆精品在线观看 | 欧美在线free | 久久狠狠干| 久草在线免费看视频 | 麻豆精品视频 | 国产精品不卡在线观看 | 亚洲综合色站 | 亚洲黄色一级电影 | 黄色视屏免费在线观看 | www91在线观看| 亚洲综合成人婷婷小说 | 91插插影库 | 夜夜骑天天操 | 日韩在线观看视频免费 | 日韩精品一区二区不卡 | 国产精品高潮久久av | 四虎国产精品永久在线国在线 | 一级黄毛片| 精品亚洲免费 | 免费a v在线 | 色婷婷激婷婷情综天天 | 免费裸体视频网 | 亚洲色图色 | 久久久久精 | 国产精品久久久久久久久久久久午夜 | 国产精品电影一区 | 久久精品视频日本 | 中文字幕色综合网 | 国产日本三级 | 96看片| 日本电影黄色 | 免费观看全黄做爰大片国产 | 国产精品久久99综合免费观看尤物 | av电影av在线| 超碰在线天天 | 中文字幕在 | 日韩 在线观看 | 日本美女xx| 深爱五月激情五月 | 99久久国产免费看 | 婷婷丁香花五月天 | 久久激情五月婷婷 | 亚洲成人频道 | 欧美色插| 高清在线一区 | 国产精品女同一区二区三区久久夜 | 国产精品成人国产乱 | 国产精品久久久久久久久久久免费看 | 在线免费日韩 | 免费黄在线看 | 国产中文伊人 | 色婷婷免费 | 久综合网 | 最近高清中文在线字幕在线观看 | 日韩欧美视频在线免费观看 | 国产精品免费久久久 | www激情网 | 黄色天堂在线观看 | 欧美日韩一级视频 | 婷婷丁香六月天 | 国产在线a| 精品国产一区在线观看 | 国产精品成人国产乱一区 | 一区二区三区久久 | 国产丝袜高跟 | 免费精品在线视频 | 99国产在线 | 日韩在线一级 | 色婷婷激情网 | 一区二区激情 | 国产免费黄视频在线观看 | 国产欧美最新羞羞视频在线观看 | 深夜免费福利视频 | 中文字幕91在线 | 国产福利在线不卡 | 日韩丝袜在线观看 | 国产精品嫩草69影院 | 午夜视频在线观看一区二区三区 | 日韩精品久久一区二区 | 亚洲国产成人精品在线 | 92国产精品久久久久首页 | 高清久久久久久 | av片一区二区 | 六月丁香激情综合色啪小说 | 国产不卡网站 | 亚洲三区在线 | 国产精品一区二区三区在线看 | 国产精品激情在线观看 | 国产精品在线看 | 视频福利在线 | 欧美日韩视频一区二区 | 亚洲精品视频在线播放 | www久久久| 91资源在线视频 | 麻豆视频在线看 | 999热线在线观看 | 欧美美女一级片 | 亚洲精品免费在线观看视频 | 国产亚洲观看 | 国产在线观看污片 | 久久久久久久久久久影院 | 中文字幕久久网 | 免费看网站在线 | 色婷婷久久久 | 免费网站在线观看人 | 欧美精品国产精品 | 九草在线观看 | 国产视频一区在线免费观看 | 久久草视频 | 在线看片一区 | 国产成人61精品免费看片 | 亚洲一区久久久 | 欧美日韩国内在线 | 亚洲高清视频在线 | 国产第一页福利影院 | 99久久成人 | 色窝资源 | 91麻豆精品国产自产 | 99精品免费在线 | 中文字幕在线国产精品 | 国产123区在线观看 国产精品麻豆91 | a成人v在线| 在线免费精品视频 | 色久av| 日批在线观看 | 在线观看国产亚洲 | 中文在线www| 国产精品午夜在线 | 精品国产一区二区三区av性色 | 天天久久夜夜 | 国产精品99久久久久久有的能看 | 国产美女免费视频 | 国产精品免费一区二区三区在线观看 | 亚洲成人精品 | 欧美日韩国产二区三区 | 探花在线观看 | 日韩福利在线观看 | 亚洲精品国产精品国自产观看浪潮 | 久久a级片 | 综合久久久久久 | 国产国语在线 | 国产亚洲精品久久久久久无几年桃 | 日日爱网址 | 色91在线视频 | 91精品免费在线观看 | 99re热精品视频 | 嫩草伊人久久精品少妇av | 国产成人久久久久 | 日韩欧美xxxx | 亚洲精品视频播放 | 亚洲国产精品电影在线观看 | 久久国产精品视频 | 一级黄色电影网站 | 天堂资源在线观看视频 | 中国一区二区视频 | 久久dvd| 国产一区二区在线看 | 国产精品第十页 | 欧美午夜理伦三级在线观看 | av黄色成人 | 国产麻豆视频免费观看 | 草莓视频在线观看免费观看 | 天天干一干 | 四虎影视精品成人 | 国产成人精品女人久久久 | 国产成人精品av | 去干成人网 | 免费日韩视频 | 日本精品视频一区二区 | 成人黄色毛片视频 | 国产伦精品一区二区三区无广告 | 九月婷婷人人澡人人添人人爽 | 亚洲va欧洲va国产va不卡 | 天天操夜夜拍 | 天天色天天搞 | 91精品在线免费视频 | 成人免费看片网址 | 在线日韩精品视频 | 99热精品在线观看 | 国产精品久久久久久欧美 | 中文字幕中文字幕 | 久久天天躁狠狠躁夜夜不卡公司 | 一区二区三区国产精品 | 精品国产一区二区三区久久久蜜月 | 午夜免费福利视频 | 蜜臀av性久久久久蜜臀av | 九九九毛片 | 日韩电影中文,亚洲精品乱码 | 欧美嫩草影院 | 亚洲精品在线观看视频 | 中文字幕av全部资源www中文字幕在线观看 | 精品国产一区二区三区久久久蜜月 | 丰满少妇在线 | 丁香九月婷婷 | 五月婷在线视频 | 日韩毛片在线一区二区毛片 | 国产精品久久9 | 成年人黄色在线观看 | 久久草在线精品 | 久久99亚洲精品 | av在线进入 | 国产福利免费看 | 中文字幕精品一区二区精品 | 91在线看网站 | 中国一级片免费看 | 九色91在线视频 | 日韩有码中文字幕在线 | 午夜精品一区二区国产 | 欧美激情视频一区 | 日本精品久久久久中文字幕5 | 久久看毛片 | 香蕉在线观看 | 奇米先锋| 黄p在线播放 | 精品亚洲午夜久久久久91 | 91精品国产乱码久久桃 | 综合国产在线 | 精品国产欧美 | 欧美极品在线播放 | 欧洲亚洲激情 | av短片在线 | 黄色a一级视频 | 99久热在线精品视频成人一区 | 一区二区视频在线免费观看 | 亚洲永久精品在线 | 欧美日本在线视频 | 亚洲日本中文字幕在线观看 | 国产精品一区二区电影 | 麻豆国产视频下载 | 在线观看视频日韩 | 日产乱码一二三区别在线 | 丝袜少妇在线 | 欧美日韩精品综合 | 99久久www | 久久精品一区二区三 | 亚洲激情 欧美激情 | 日韩精品一区二区三区免费观看视频 | 中文字幕亚洲高清 | 久久久资源 | 久久久99精品免费观看 | 99热在| 国产精品wwwwww| 精品国产aⅴ麻豆 | 天天射天天拍 | 最近中文字幕在线中文高清版 | 欧美精品亚洲精品 | 综合天天久久 | 亚洲1区 在线 | 中文字幕在线高清 | 亚洲在线黄色 | 又黄又爽又刺激 | 在线免费国产 | 成人在线播放av | 亚洲欧美日韩精品久久久 | 久久综合狠狠狠色97 | 亚洲免费av一区二区 | 国产精品自在线拍国产 | 九九九免费视频 | 久久综合狠狠综合久久激情 | 国产麻豆精品久久一二三 | 欧美日韩一区二区视频在线观看 | 亚洲精品久久久蜜臀下载官网 | 96精品视频| 免费在线观看成人 | 91在线免费视频观看 | 波多野结衣久久资源 | av超碰免费在线 | 欧美日韩中文国产一区发布 | 91麻豆精品国产91久久久无需广告 | 欧美日韩一级视频 | 97国产超碰在线 | 黄色精品网站 | 92中文资源在线 | 麻豆国产露脸在线观看 | 国产小视频在线观看 | 亚洲女在线 | 亚洲欧洲视频 | 五月天中文字幕 | 
成人三级网站在线观看 | 久久99国产视频 | 91成人看片 | 国产久视频 | 久久久这里有精品 | 久久综合九色综合97婷婷女人 | 国产精品久久久久久久久久久免费看 | 在线看片视频 | 国产97视频 | 超碰在线99 | 99精品成人| 久久夜色精品国产亚洲aⅴ 91chinesexxx | 亚洲精品一区二区三区新线路 | 国产高清视频在线免费观看 | 午夜视频在线瓜伦 | 婷婷视频导航 | 91黄色小视频 | 黄色小说视频在线 | 日本久久成人中文字幕电影 | 麻花豆传媒mv在线观看 | 国产精品3 | 99成人精品| 在线观看一二三区 | 国产69精品久久久久久 | 久久综合狠狠 | 中文字幕在线观看免费 | www色网站 | 亚洲午夜精品一区二区三区电影院 | 高清日韩一区二区 | 91九色精品 | 五月婷在线视频 | 91成人免费看 | 亚洲欧美国产日韩在线观看 | 97超碰在线播放 | 国产精品手机在线播放 | 久久久久久久久电影 | 国产在线播放观看 | 日韩精品一区二区三区在线视频 | 91精品入口 | 午夜18视频在线观看 | 四虎国产精品永久在线国在线 | 在线观看精品一区 | 久久精品欧美一区 | 久久99九九99精品 | 激情久久一区二区三区 | 在线观看小视频 | 亚洲综合精品在线 | 日韩国产欧美视频 | 久久精品区 | 91精品色| 91在线免费公开视频 | 天天综合中文 | 五月天久久激情 | 国产黄色特级片 | 国产精品日韩在线观看 | 日韩在线观看一区二区三区 | 黄色片软件网站 | 久久污视频 | 在线中文字幕视频 | 国产精品久久久久久婷婷天堂 | 日韩在观看线 | 亚洲一区尤物 | 91色一区二区三区 | 五月综合激情婷婷 | 久久免费大片 | 久久免费a | 少妇精69xxtheporn | 天天摸夜夜操 | 久草免费资源 | 最近中文字幕免费视频 | 中文字幕一区二区三区精华液 | 久久人人爽人人爽人人片av软件 | 日韩在线免费小视频 | 黄污视频网站大全 | 在线视频 国产 日韩 | 一级片免费观看视频 | 日日操日日干 | 免费观看十分钟 | 97成人资源站 | 国产一级片久久 | 国产精品999久久久 久产久精国产品 | 婷婷色在线播放 |