

Today's arXiv Picks | 14 New EMNLP 2021 Papers

Published: 2024/10/8

About #Today's arXiv Picks#

This is a column from "AI 學術前沿" (AI Academic Frontier): each day, the editors select high-quality papers from arXiv and share them with readers.

Effective Sequence-to-Sequence Dialogue State Tracking

Comment: Accepted at EMNLP 2021

Link: http://arxiv.org/abs/2108.13990

Abstract

Sequence-to-sequence models have been applied to a wide variety of NLP tasks, but how to properly use them for dialogue state tracking has not been systematically investigated. In this paper, we study this problem from the perspectives of pre-training objectives as well as the formats of context representations. We demonstrate that the choice of pre-training objective makes a significant difference to the state tracking quality. In particular, we find that masked span prediction is more effective than auto-regressive language modeling. We also explore using Pegasus, a span prediction-based pre-training objective for text summarization, for the state tracking model. We found that pre-training for the seemingly distant summarization task works surprisingly well for dialogue state tracking. In addition, we found that while recurrent state context representation works also reasonably well, the model may have a hard time recovering from earlier mistakes. We conducted experiments on the MultiWOZ 2.1-2.4 data sets with consistent observations.

Thermostat: A Large Collection of NLP Model Explanations and Analysis Tools

Comment: Accepted to EMNLP 2021 System Demonstrations

Link: http://arxiv.org/abs/2108.13961

Abstract

In the language domain, as in other domains, neural explainability takes an ever more important role, with feature attribution methods on the forefront. Many such methods require considerable computational resources and expert knowledge about implementation details and parameter choices. To facilitate research, we present Thermostat which consists of a large collection of model explanations and accompanying analysis tools. Thermostat allows easy access to over 200k explanations for the decisions of prominent state-of-the-art models spanning across different NLP tasks, generated with multiple explainers. The dataset took over 10k GPU hours (> one year) to compile; compute time that the community now saves. The accompanying software tools allow to analyse explanations instance-wise but also accumulatively on corpus level. Users can investigate and compare models, datasets and explainers without the need to orchestrate implementation details. Thermostat is fully open source, democratizes explainability research in the language domain, circumvents redundant computations and increases comparability and replicability.

Robust Retrieval Augmented Generation for Zero-shot Slot Filling

Comment: Accepted at EMNLP 2021. arXiv admin note: substantial text overlap with arXiv:2104.08610

Link: http://arxiv.org/abs/2108.13934

Abstract

Automatically inducing high quality knowledge graphs from a given collection of documents still remains a challenging problem in AI. One way to make headway for this problem is through advancements in a related task known as slot filling. In this task, given an entity query in form of [Entity, Slot, ?], a system is asked to fill the slot by generating or extracting the missing value exploiting evidence extracted from relevant passage(s) in the given document collection. The recent works in the field try to solve this task in an end-to-end fashion using retrieval-based language models. In this paper, we present a novel approach to zero-shot slot filling that extends dense passage retrieval with hard negatives and robust training procedures for retrieval augmented generation models. Our model reports large improvements on both T-REx and zsRE slot filling datasets, improving both passage retrieval and slot value generation, and ranking at the top-1 position in the KILT leaderboard. Moreover, we demonstrate the robustness of our system showing its domain adaptation capability on a new variant of the TACRED dataset for slot filling, through a combination of zero/few-shot learning. We release the source code and pre-trained models.

Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning

Comment: Accepted by EMNLP2021 main conference

Link: http://arxiv.org/abs/2108.13888

Abstract

Pre-Trained Models have been widely applied and recently proved vulnerable under backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers. When the triggers are activated, even the fine-tuned model will predict pre-defined labels, causing a security threat. These backdoors generated by the poisoning methods can be erased by changing hyper-parameters during fine-tuning or detected by finding the triggers. In this paper, we propose a stronger weight-poisoning attack method that introduces a layerwise weight poisoning strategy to plant deeper backdoors; we also introduce a combinatorial trigger that cannot be easily detected. The experiments on text classification tasks show that previous defense methods cannot resist our weight-poisoning method, which indicates that our method can be widely applied and may provide hints for future model robustness studies.

When Retriever-Reader Meets Scenario-Based Multiple-Choice Questions

Comment: 10 pages, accepted to Findings of EMNLP 2021

Link: http://arxiv.org/abs/2108.13875

Abstract

Scenario-based question answering (SQA) requires retrieving and reading paragraphs from a large corpus to answer a question which is contextualized by a long scenario description. Since a scenario contains both keyphrases for retrieval and much noise, retrieval for SQA is extremely difficult. Moreover, it can hardly be supervised due to the lack of relevance labels of paragraphs for SQA. To meet the challenge, in this paper we propose a joint retriever-reader model called JEEVES where the retriever is implicitly supervised only using QA labels via a novel word weighting mechanism. JEEVES significantly outperforms a variety of strong baselines on multiple-choice questions in three SQA datasets.

Contrastive Domain Adaptation for Question Answering using Limited Text Corpora

Comment: Accepted to EMNLP 2021

Link: http://arxiv.org/abs/2108.13854

Abstract

Question generation has recently shown impressive results in customizing question answering (QA) systems to new domains. These approaches circumvent the need for manually annotated training data from the new domain and, instead, generate synthetic question-answer pairs that are used for training. However, existing methods for question generation rely on large amounts of synthetically generated datasets and costly computational resources, which render these techniques widely inaccessible when the text corpora is of limited size. This is problematic as many niche domains rely on small text corpora, which naturally restricts the amount of synthetic data that can be generated. In this paper, we propose a novel framework for domain adaptation called contrastive domain adaptation for QA (CAQA). Specifically, CAQA combines techniques from question generation and domain-invariant learning to answer out-of-domain questions in settings with limited text corpora. Here, we train a QA system on both source data and generated data from the target domain with a contrastive adaptation loss that is incorporated in the training objective. By combining techniques from question generation and domain-invariant learning, our model achieved considerable improvements compared to state-of-the-art baselines.

Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience

Comment: EMNLP 2021 Pre-print

Link: http://arxiv.org/abs/2108.13759

Abstract

Pretrained transformer-based models such as BERT have demonstrated state-of-the-art predictive performance when adapted into a range of natural language processing tasks. An open problem is how to improve the faithfulness of explanations (rationales) for the predictions of these models. In this paper, we hypothesize that salient information extracted a priori from the training data can complement the task-specific information learned by the model during fine-tuning on a downstream task. In this way, we aim to help BERT not to forget assigning importance to informative input tokens when making predictions by proposing SaLoss; an auxiliary loss function for guiding the multi-head attention mechanism during training to be close to salient information extracted a priori using TextRank. Experiments for explanation faithfulness across five datasets show that models trained with SaLoss consistently provide more faithful explanations across four different feature attribution methods compared to vanilla BERT. Using the rationales extracted from vanilla BERT and SaLoss models to train inherently faithful classifiers, we further show that the latter result in higher predictive performance in downstream tasks.
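The auxiliary-loss idea can be sketched as a divergence penalty between the model's attention distribution and pre-extracted salience scores. This is a minimal illustrative sketch, not the paper's implementation: the function names, the KL formulation, and the weighting hyper-parameter are assumptions.

```python
import math

def salience_loss(attention, salience, eps=1e-12):
    # Hypothetical SaLoss-style term: KL divergence pulling the
    # attention distribution toward a priori salience scores
    # (e.g. TextRank weights normalized to sum to 1).
    return sum(s * math.log((s + eps) / (a + eps))
               for a, s in zip(attention, salience))

def training_loss(task_loss, attention, salience, lam=0.5):
    # Total objective: downstream task loss plus the salience-guidance
    # term, weighted by an assumed hyper-parameter `lam`.
    return task_loss + lam * salience_loss(attention, salience)
```

When the attention already matches the salience distribution the extra term vanishes, so the guidance only activates where the model ignores tokens marked informative a priori.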

Plan-then-Generate: Controlled Data-to-Text Generation via Planning

Comment: Accepted to Findings of EMNLP 2021

Link: http://arxiv.org/abs/2108.13740

Abstract

Recent developments in neural networks have led to the advance in data-to-text generation. However, the lack of ability of neural models to control the structure of generated output can be limiting in certain real-world applications. In this study, we propose a novel Plan-then-Generate (PlanGen) framework to improve the controllability of neural data-to-text models. Extensive experiments and analyses are conducted on two benchmark datasets, ToTTo and WebNLG. The results show that our model is able to control both the intra-sentence and inter-sentence structure of the generated output. Furthermore, empirical comparisons against previous state-of-the-art methods show that our model improves the generation quality as well as the output diversity as judged by human and automatic evaluations.

Automatic Rule Generation for Time Expression Normalization

Comment: Accepted to Findings of EMNLP 2021

Link: http://arxiv.org/abs/2108.13658

Abstract

The understanding of time expressions includes two sub-tasks: recognition and normalization. In recent years, significant progress has been made in the recognition of time expressions while research on normalization has lagged behind. Existing SOTA normalization methods highly rely on rules or grammars designed by experts, which limits their performance on emerging corpora, such as social media texts. In this paper, we model time expression normalization as a sequence of operations to construct the normalized temporal value, and we present a novel method called ARTime, which can automatically generate normalization rules from training data without expert interventions. Specifically, ARTime automatically captures possible operation sequences from annotated data and generates normalization rules on time expressions with common surface forms. The experimental results show that ARTime can significantly surpass SOTA methods on the Tweets benchmark, and achieves competitive results with existing expert-engineered rule methods on the TempEval-3 benchmark.

Discretized Integrated Gradients for Explaining Language Models

Comment: Accepted in EMNLP 2021

Link: http://arxiv.org/abs/2108.13654

Abstract

As a prominent attribution-based explanation algorithm, Integrated Gradients (IG) is widely adopted due to its desirable explanation axioms and the ease of gradient computation. It measures feature importance by averaging the model's output gradient interpolated along a straight-line path in the input data space. However, such straight-line interpolated points are not representative of text data due to the inherent discreteness of the word embedding space. This questions the faithfulness of the gradients computed at the interpolated points and consequently, the quality of the generated explanations. Here we propose Discretized Integrated Gradients (DIG), which allows effective attribution along non-linear interpolation paths. We develop two interpolation strategies for the discrete word embedding space that generate interpolation points that lie close to actual words in the embedding space, yielding more faithful gradient computation. We demonstrate the effectiveness of DIG over IG through experimental and human evaluations on multiple sentiment classification datasets. We provide the source code of DIG to encourage reproducible research.
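For reference, vanilla IG averages gradients along a straight-line path from a baseline to the input; DIG replaces these interpolation points with ones near real word embeddings. A minimal scalar sketch of the straight-line version (the part DIG modifies), with assumed names:

```python
def integrated_gradients(grad_fn, x, baseline=0.0, steps=100):
    # Riemann-sum approximation of IG for a scalar input:
    # (x - baseline) times the average gradient sampled along
    # the straight path from `baseline` to `x`.
    total = 0.0
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        total += grad_fn(point)
    return (x - baseline) * total / steps
```

By the completeness axiom the attribution should approximate f(x) - f(baseline); for f(x) = x**2 (gradient 2x), integrating from 0 to 3 gives roughly 9.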

T3-Vis: a visual analytic framework for Training and fine-Tuning Transformers in NLP

Comment: 10 pages, 4 figures, accepted to EMNLP 2021 System Demonstration

Link: http://arxiv.org/abs/2108.13587

Abstract

Transformers are the dominant architecture in NLP, but their training and fine-tuning is still very challenging. In this paper, we present the design and implementation of a visual analytic framework for assisting researchers in such process, by providing them with valuable insights about the model's intrinsic properties and behaviours. Our framework offers an intuitive overview that allows the user to explore different facets of the model (e.g., hidden states, attention) through interactive visualization, and allows a suite of built-in algorithms that compute the importance of model components and different parts of the input sequence. Case studies and feedback from a user focus group indicate that the framework is useful, and suggest several improvements.

Scheduled Sampling Based on Decoding Steps for Neural Machine Translation

Comment: Code is at https://github.com/Adaxry/ss_on_decoding_steps. To appear in EMNLP-2021 main conference. arXiv admin note: text overlap with arXiv:2107.10427

Link: http://arxiv.org/abs/2108.12963

Abstract

Scheduled sampling is widely used to mitigate the exposure bias problem for neural machine translation. Its core motivation is to simulate the inference scene during training by replacing ground-truth tokens with predicted tokens, thus bridging the gap between training and inference. However, vanilla scheduled sampling is merely based on training steps and equally treats all decoding steps. Namely, it simulates an inference scene with uniform error rates, which disobeys the real inference scene, where larger decoding steps usually have higher error rates due to error accumulations. To alleviate the above discrepancy, we propose scheduled sampling methods based on decoding steps, increasing the selection chance of predicted tokens with the growth of decoding steps. Consequently, we can more realistically simulate the inference scene during training, thus better bridging the gap between training and inference. Moreover, we investigate scheduled sampling based on both training steps and decoding steps for further improvements. Experimentally, our approaches significantly outperform the Transformer baseline and vanilla scheduled sampling on three large-scale WMT tasks. Additionally, our approaches also generalize well to the text summarization task on two popular benchmarks.
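The core change can be illustrated with a schedule that makes the probability of feeding back a model prediction grow with the decoding step rather than the training step. The exponential form and the constant `k` below are assumptions chosen for illustration, not the paper's exact schedule:

```python
import math
import random

def pred_prob(decoding_step, k=10.0):
    # Probability of replacing the ground-truth token with the model's
    # own prediction; increases monotonically with the decoding step,
    # mimicking error accumulation at inference time.
    return 1.0 - k / (k + math.exp(decoding_step / k))

def next_input(gold_token, predicted_token, decoding_step, rng):
    # Per-step sampling decision used when building the decoder input.
    if rng.random() < pred_prob(decoding_step):
        return predicted_token
    return gold_token
```

Early decoding steps thus stay close to teacher forcing, while later steps increasingly see the model's own (possibly erroneous) outputs.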

Distilling the Knowledge of Large-scale Generative Models into Retrieval Models for Efficient Open-domain Conversation

Comment: EMNLP21-Findings

Link: http://arxiv.org/abs/2108.12582

Abstract

Despite the remarkable performance of large-scale generative models in open-domain conversation, they are known to be less practical for building real-time conversation systems due to high latency. On the other hand, retrieval models could return responses with much lower latency but show inferior performance to the large-scale generative models since the conversation quality is bounded by the pre-defined response set. To take advantage of both approaches, we propose a new training method called G2R (Generative-to-Retrieval distillation) that preserves the efficiency of a retrieval model while leveraging the conversational ability of a large-scale generative model by infusing the knowledge of the generative model into the retrieval model. G2R consists of two distinct techniques of distillation: the data-level G2R augments the dialogue dataset with additional responses generated by the large-scale generative model, and the model-level G2R transfers the response quality score assessed by the generative model to the score of the retrieval model by the knowledge distillation loss. Through extensive experiments including human evaluation, we demonstrate that our retrieval-based conversation system trained with G2R shows a substantially improved performance compared to the baseline retrieval model while showing significantly lower inference latency than the large-scale generative models.
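The model-level part of G2R follows a standard knowledge-distillation pattern: the retriever's scores over candidate responses are trained to match the generative model's quality scores. A minimal sketch under assumed names (the softmax temperature and function signatures are illustrative, not from the paper):

```python
import math

def softmax(scores, temperature=1.0):
    # Numerically stable softmax over a list of candidate scores.
    m = max(scores)
    exps = [math.exp((s - m) / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_loss(retriever_scores, generator_scores, temperature=2.0):
    # Cross-entropy between the teacher (generator) distribution and
    # the student (retriever) distribution over the same candidates.
    teacher = softmax(generator_scores, temperature)
    student = softmax(retriever_scores, temperature)
    return -sum(p * math.log(q) for p, q in zip(teacher, student))
```

Cross-entropy is minimized when the student's ranking distribution matches the teacher's, which is how the generator's notion of response quality is transferred.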

Few-Shot Table-to-Text Generation with Prototype Memory

Comment: Accepted to Findings of EMNLP 2021

Link: http://arxiv.org/abs/2108.12516

Abstract

Neural table-to-text generation models have achieved remarkable progress on an array of tasks. However, due to the data-hungry nature of neural models, their performances strongly rely on large-scale training examples, limiting their applicability in real-world applications. To address this, we propose a new framework: Prototype-to-Generate (P2G), for table-to-text generation under the few-shot scenario. The proposed framework utilizes the retrieved prototypes, which are jointly selected by an IR system and a novel prototype selector to help the model bridging the structural gap between tables and texts. Experimental results on three benchmark datasets with three state-of-the-art models demonstrate that the proposed framework significantly improves the model performance across various evaluation metrics.

