OpenAI Self-Supervised Learning Notes: Self-Supervised Learning | Tutorial | NeurIPS 2021


Reposted from a WeChat public account.
Original link: https://mp.weixin.qq.com/s?__biz=Mzg4MjgxMjgyMg==&mid=2247486049&idx=1&sn=1d98375dcbb9d0d68e8733f2dd0a2d40&chksm=cf51b898f826318ead24e414144235cfd516af4abb71190aeca42b1082bd606df6973eb963f0#rd

OpenAI Self-Supervised Learning Notes


文章目錄

    • OpenAI Self-Supervised Learning Notes
      • Outline
      • Introduction
        • What is self-supervised learning?
        • What's Possible with Self-Supervised Learning?
      • Early Work
        • Early Work: Connecting the Dots
        • Restricted Boltzmann Machines
        • Autoencoder: Self-Supervised Learning for Vision in Early Days
        • Word2Vec: Self-Supervised Learning for Language
        • Autoregressive Modeling
        • Siamese Networks
        • Multiple Instance Learning & Metric Learning
      • Methods
        • Methods for Framing Self-Supervised Learning Tasks
        • Self-Prediction
        • Self-prediction: Autoregressive Generation
        • Self-Prediction: Masked Generation
        • Self-Prediction: Innate Relationship Prediction
        • Self-Prediction: Hybrid Self-Prediction Models
        • Contrastive Learning
        • Contrastive Learning: Inter-Sample Classification
          • Loss function 1: Contrastive loss
          • Loss function 2: Triplet loss
          • Loss function 3: N-pair loss
          • Loss function 4: Lifted structured loss
          • Loss function 5: Noise Contrastive Estimation (NCE)
          • Loss function 6: InfoNCE
          • Loss function 7: Soft-Nearest Neighbors Loss
        • Contrastive Learning: Feature Clustering
        • Contrastive Learning: Multiview Coding
        • Contrastive Learning between Modalities
      • Pretext tasks
        • Recap: Pretext Tasks
        • Pretext Tasks: Taxonomy
        • Image / Vision Pretext Tasks
          • Image Pretext Tasks: Variational Autoencoders
          • Image Pretext Tasks: Generative Adversarial Networks
          • Vision Pretext Tasks: Autoregressive Image Generation
          • Vision Pretext Tasks: Diffusion Model
          • Vision Pretext Tasks: Masked Prediction
          • Vision Pretext Tasks: Colorization and More
          • Vision Pretext Tasks: Innate Relationship Prediction
          • Contrastive Predictive Coding and InfoNCE
          • Vision Pretext Tasks: Inter-Sample Classification
          • Vision Pretext Tasks: Contrastive Learning
          • Vision Pretext Tasks: Data Augmentation and Multiple Views
          • Vision Pretext Tasks: Inter-Sample Classification
            • MoCo
            • SimCLR
            • Barlow Twins
          • Vision Pretext Tasks: Non-Contrastive Siamese Networks
          • Vision Pretext Tasks: Feature Clustering with K-Means
          • Vision Pretext Tasks: Feature Clustering with Sinkhorn-Knopp
          • Vision Pretext Tasks: Feature Clustering to improve SSL
          • Vision Pretext Tasks: Nearest-Neighbor
          • Vision Pretext Tasks: Combining with Supervised Loss
        • Video Pretext Tasks
          • Video Pretext Tasks: Innate Relationship Prediction
          • Video Pretext Tasks: Optical Flow
          • Video Pretext Tasks: Sequence Ordering
          • Video Pretext Tasks: Colorization
          • Video Pretext Tasks: Contrastive Multi-View Learning
          • Video Pretext Task: Autoregressive Generation
        • Audio Pretext Tasks
          • Audio Pretext Tasks: Contrastive Learning
          • Audio Pretext Task: Masked Language Modeling for ASR
        • Multimodal Pretext Tasks
        • Language Pretext Tasks
          • Language Pretext Tasks: Generative Language Modeling
          • Language Pretext Tasks: Sentence Embedding
      • Training Techniques
        • Techniques: Data augmentation
          • Techniques: Data augmentation -- Image Augmentation
          • Techniques: Data augmentation -- Text Augmentation
        • Hard Negative Mining
          • What is "hard negative mining"
          • Explicit hard negative mining
          • Implicit hard negative mining
      • Theories
        • Contrastive learning captures shared information between views
        • The InfoMin Principle
        • Alignment and Uniformity on the Hypersphere
        • Dimensional Collapse
        • Provable Guarantees for Contrastive Learning
      • Future Directions
        • Future Directions


Video: https://www.youtube.com/watch?v=7l6fttRJzeU
Slides: https://nips.cc/media/neurips-2021/Slides/21895.pdf

Self-Supervised Learning
Self-Prediction and Contrastive Learning

  • Self-Supervised Learning
    • a popular paradigm of representation learning

Outline

  • Introduction: motivation, basic concepts, examples
  • Early Work: Look into connection with old methods
  • Methods
    • Self-prediction
    • Contrastive Learning
    • (for each subsection, present the framework and categorization)
  • Pretext tasks: a wide range of literature review
  • Techniques: improve training efficiency

Introduction

What is self-supervised learning and why do we need it?

What is self-supervised learning?

  • Self-supervised learning (SSL):
    • a special type of representation learning that enables learning good data representations from unlabelled datasets
  • Motivation :
    • the idea of constructing supervised learning tasks out of unsupervised datasets

    • Why?

      ◦ Data labeling is expensive, so high-quality labeled datasets are limited

      ◦ Learning good representations makes it easier to transfer useful information to a variety of downstream tasks ⇒ e.g. few-shot learning / zero-shot transfer to new tasks

Self-supervised learning tasks are also known as pretext tasks

What’s Possible with Self-Supervised Learning?

  • Video Colorization (Vondrick et al 2018)

    • a self-supervised learning method

    • resulting in a rich representation

    • can be used for video segmentation + unlabelled visual region tracking, without extra fine-tuning

    • just label the first frame

  • Zero-shot CLIP (Radford et al. 2021)

    • Despite not training on supervised labels

    • the zero-shot CLIP classifier achieves great performance on challenging image classification tasks

Early Work

Precursors to recent self-supervised approaches

Early Work: Connecting the Dots

Some ideas:

  • Restricted Boltzmann Machines

  • Autoencoders

  • Word2Vec

  • Autoregressive Modeling

  • Siamese networks

  • Multiple Instance / Metric Learning

Restricted Boltzmann Machines

  • RBM:
    • a special case of Markov random field

    • consisting of visible units and hidden units

    • has connections between any pair across visible and hidden units, but not within each group

Autoencoder: Self-Supervised Learning for Vision in Early Days

  • Autoencoder: a precursor to the modern self-supervised approaches
    • such as the Denoising Autoencoder
  • Has inspired many self-supervised approaches in later years
    • such as masked language models (e.g. BERT), MAE

Word2Vec: Self-Supervised Learning for Language

  • Word Embeddings to map words to vectors
    • extract the feature of words
  • idea:
    • the sum of neighboring word embeddings is predictive of the word in the middle

  • An interesting phenomenon resulting from Word2Vec:
    • you can observe linear substructure in the embedding space: lines connecting comparable concepts, such as corresponding masculine and feminine words, are roughly parallel

Autoregressive Modeling

  • Autoregressive model:

    • Autoregressive (AR) models are a class of time series models in which the value at a given time step is modeled as a linear function of previous values

    • NADE: Neural Autoregressive Distribution Estimator

  • Autoregressive models have also been the basis for many self-supervised methods such as GPT

Siamese Networks

Many contrastive self-supervised learning methods use a pair of neural networks and learn from the comparison of their outputs
– this idea can be traced back to Siamese networks

  • Self-organizing neural networks
    • where two neural networks take separate but related parts of the input and learn to maximize the agreement between the two outputs
  • Siamese Networks
    • if you believe that a network f can encode x well into a good representation f(x)

    • then, for two different inputs x1 and x2, their distance can be defined as d(x1, x2) = L(f(x1), f(x2))

    • the idea of running two identical CNNs on two different inputs and then comparing them is a Siamese network

    • Train by (a minimal sketch follows this list):

      ◦ If x_i and x_j are the same person, $\|f(x_i) - f(x_j)\|$ is small

      ◦ If x_i and x_j are different people, $\|f(x_i) - f(x_j)\|$ is large
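As a concrete illustration, here is a minimal PyTorch-style sketch of this pairwise training criterion (the encoder `f`, the `same_person` indicator, and the margin value are illustrative assumptions, not taken from the slides):

```python
import torch.nn.functional as F

def siamese_pair_loss(f, x1, x2, same_person, margin=1.0):
    """Pairwise criterion for a Siamese network: the SAME encoder f embeds both inputs."""
    d = F.pairwise_distance(f(x1), f(x2))                 # distance between the two embeddings
    pos = same_person * d.pow(2)                          # pull matching pairs together
    neg = (1 - same_person) * F.relu(margin - d).pow(2)   # push non-matching pairs apart, up to the margin
    return (pos + neg).mean()
```

This is essentially the contrastive loss discussed later in the Methods section, applied through two weight-sharing ("Siamese") encoder branches.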

Multiple Instance Learning & Metric Learning

Predecessors of the recent contrastive learning techniques: multiple instance learning and metric learning

  • deviate from the typical framework of empirical risk minimization

    • define the objective function in terms of multiple samples from the dataset ⇒ multiple instance learning
  • early work:

    • around non-linear dimensionality reduction
    • e.g. multi-dimensional scaling and locally linear embedding
    • better than PCA: can preserve the local structure of data samples
  • metric learning:

    • x and y: two samples
    • A: A learnable positive semi-definite matrix
  • Contrastive loss:

    • uses a spring system to decrease the distance between inputs of the same type and increase it between inputs of different types
  • Triplet loss

    • another way to obtain a learned metric
    • defined using 3 data points
    • anchor, positive and negative
    • the anchor point is learned to become similar to the positive, and dissimilar to the negative
  • N-pair loss:

    • generalized triplet loss
    • recent contrastive learning takes the N-pair loss as its prototype

Methods

  • self-prediction
  • Contrastive learning

Methods for Framing Self-Supervised Learning Tasks

  • Self-prediction: Given an individual data sample, the task is to predict one part of the sample given the other part
    • i.e. “intra-sample” prediction

The part to be predicted pretends to be missing

  • Contrastive learning: Given multiple data samples, the task is to predict the relationship among them
    • relationship: can be based on inner logics within data

      ◦ such as different camera views of the same scene

      ◦ or create multiple augmented versions of the same sample

The multiple samples can be selected from the dataset based on some known logics (e.g., the order of words / sentences), or fabricated by altering the original version
i.e. we know the true relationship between samples but pretend not to know it

Self-Prediction

  • Self-prediction construct prediction tasks within every individual data sample

    • to predict a part of the data from the rest while pretending we don’t know that part

    • The figure in the tutorial demonstrates how flexible and diverse the options are for constructing self-prediction learning tasks

      ◦ can mask any dimensions

  • Categories:

    • Autoregressive generation
    • Masked generation
    • Innate relationship prediction
    • Hybrid self-prediction

Self-prediction: Autoregressive Generation

  • The autoregressive model predicts future behavior based on past behavior

    • Any data that comes with an innate sequential order can be modeled with regression
  • Examples :

    • Audio (WaveNet, WaveRNN)
    • Autoregressive language modeling (GPT, XLNet)
    • Images in raster scan (PixelCNN, PixelRNN, iGPT)

Self-Prediction: Masked Generation

  • mask a random portion of information and pretend it is missing, irrespective of the natural sequence

    • The model learns to predict the missing portion given other unmasked information
  • e.g.,

    • predicting random words based on other words in the same context around it
  • Examples :

    • Masked language modeling (BERT)
    • Images with masked patch (denoising autoencoder, context autoencoder, colorization)

Self-Prediction: Innate Relationship Prediction

  • Some transformations (e.g., segmentation, rotation) of a data sample should maintain the original information or follow the desired innate logic

  • Examples

    • Order of image patches

      ◦ e.g., shuffle the patches

      ◦ e.g., relative position, jigsaw puzzle

    • Image rotation

    • Counting features across patches

Self-Prediction: Hybrid Self-Prediction Models

Hybrid Self-Prediction Models: combine different types of generative modeling

  • VQ-VAE + AR
    • Jukebox (Dhariwal et al. 2020), DALL-E (Ramesh et al. 2021)
  • VQ-VAE + AR + Adversarial
    • VQGAN (Esser & Rombach et al. 2021)

    • VQ-VAE: to learn a discrete codebook of context-rich visual parts

    • A transformer model: trained to autoregressively model the composition of this codebook

Contrastive Learning

  • Goal:

    • To learn such an embedding space in which similar sample pairs stay close to each other while dissimilar ones are far apart

  • Contrastive learning can be applied to both supervised and unsupervised settings

    • when working with unsupervised data, contrastive learning is one of the most powerful approaches in self-supervised learning
  • Category

    • Inter-sample classification

      🚩 the most dominant approach

      ◦ “inter-sample”: emphasizes the distinction from “intra-sample”

    • Feature clustering

    • Multiview coding

Contrastive Learning: Inter-Sample Classification

  • Given both similar (“positive”) and dissimilar (“negative”) candidates, to identify which ones are similar to the anchor data point is a classification task

    • anchor: the original input
  • How to construct a set of data point candidates:

    • The original input and its distorted version
    • Data that captures the same target from different views
  • Common loss functions :

    • Contrastive loss, 2005
    • Triplet loss, 2015
    • Lifted structured loss, 2015
    • Multi-class n-pair loss, 2016
    • Noise contrastive estimation, 2010
    • InfoNCE, 2018
    • Soft-nearest neighbors loss, 2007, 2019
Loss function 1: Contrastive loss
  • 2005

  • Works with labelled dataset

  • Encode data into an embedding vector

    • such that examples from the same class have similar embeddings and samples from different classes have different ones
  • Given two labeled data pairs $(x_i, y_i)$ and $(x_j, y_j)$:
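The formula itself is not reproduced in these notes; the standard form of this loss (with encoder $f_\theta$ and margin $\epsilon$) is:

```latex
\mathcal{L}_{\text{cont}}(x_i, x_j, \theta) =
  \mathbb{1}[y_i = y_j]\,\|f_\theta(x_i) - f_\theta(x_j)\|_2^2
  + \mathbb{1}[y_i \neq y_j]\,\max\big(0,\ \epsilon - \|f_\theta(x_i) - f_\theta(x_j)\|_2\big)^2
```

Same-class pairs are pulled together; different-class pairs are pushed apart until they are at least $\epsilon$ away.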

Loss function 2: Triplet loss
  • Triplet loss (Schroff et al. 2015)

    • learns to minimize the distance between the anchor $x$ and the positive $x^{+}$, and
    • maximize the distance between the anchor $x$ and the negative $x^{-}$ at the same time
  • Given a triplet input $(x, x^{+}, x^{-})$:

It is called a triplet loss because it demands an input triplet containing one anchor, one positive, and one negative.
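The slide's formula is missing from these notes; the standard triplet loss (with margin $\epsilon$) is:

```latex
\mathcal{L}_{\text{triplet}}(x, x^{+}, x^{-}) =
  \max\big(0,\ \|f(x) - f(x^{+})\|_2^2 - \|f(x) - f(x^{-})\|_2^2 + \epsilon\big)
```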

Loss function 3: N-pair loss
  • N-Pair loss (Sohn 2016)
    • generalizes triplet loss to include comparison with multiple negative samples
  • Given one positive and N-1 negative samples:
    • $\{x, x^{+}, x_{1}^{-}, \ldots, x_{N-1}^{-}\}$
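The corresponding formula (standard form, not preserved in the notes) generalizes the triplet loss into a softmax over the N-1 negatives:

```latex
\mathcal{L}_{N\text{-pair}} =
  -\log \frac{\exp\!\big(f(x)^{\top} f(x^{+})\big)}
             {\exp\!\big(f(x)^{\top} f(x^{+})\big) + \sum_{i=1}^{N-1} \exp\!\big(f(x)^{\top} f(x_i^{-})\big)}
```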

Loss function 4: Lifted structured loss
  • Lifted structured loss (Song et al. 2015):

    • utilizes all the pairwise edges within one training batch for better computational efficiency

  • For large-scale training, the batch size is often very large

    • meaning we have many samples within one batch
    • so we can construct multiple similar or dissimilar pairs
    • Lifted structured loss: utilizes all the pairwise edges of relationship within one training batch
    • improves computational efficiency as it incorporates more information within one batch
Loss function 5: Noise Contrastive Estimation (NCE)
  • Noise contrastive Estimation (NCE): Gutmann & Hyvarinen 2010

    • runs logistic regression to tell apart the target data from noise
  • Given target sample distribution p and noise distribution q:

  • originally proposed in 2010; later widely used to learn word embeddings
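Written out (a standard formulation, since the slide's equation is not in the notes): with logit $\ell_\theta(u) = \log p_\theta(u) - \log q(u)$ and sigmoid $\sigma$, NCE trains a binary classifier to separate data samples $x_i \sim p$ from noise samples $\tilde{x}_i \sim q$:

```latex
\mathcal{L}_{\text{NCE}} = -\frac{1}{N}\sum_{i=1}^{N}
  \Big[\log \sigma\big(\ell_\theta(x_i)\big) + \log\big(1 - \sigma(\ell_\theta(\tilde{x}_i))\big)\Big]
```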

Loss function 6: InfoNCE
  • InfoNCE (2018)

    • Uses categorical cross-entropy loss to identify the positive sample amongst a set of unrelated noise samples
  • Given a context vector c, the positive sample should be drawn from the conditional distribution $p(x \mid c)$

    • while the N-1 negative samples are drawn from the proposal distribution $p(x)$, independent of the context c
  • The probability of detecting the positive sample correctly is:
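The formula itself is missing from the notes; in the standard InfoNCE formulation, with a scoring function $f(x, c) \propto \frac{p(x \mid c)}{p(x)}$, the probability of picking the positive $x_{\text{pos}}$ out of the set $X$ and the resulting loss are:

```latex
p(C = \text{pos} \mid X, c) = \frac{f(x_{\text{pos}}, c)}{\sum_{x_j \in X} f(x_j, c)}
\qquad
\mathcal{L}_{\text{InfoNCE}} = -\,\mathbb{E}\left[\log \frac{f(x, c)}{\sum_{x' \in X} f(x', c)}\right]
```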

Loss function 7: Soft-Nearest Neighbors Loss
  • Soft-Nearest Neighbors Loss (Frosst et al. 2019): extends the loss function to include multiple positive samples given known labels
  • Given a batch of samples $\{x_i, y_i\}_{i=1}^{B}$
    • the known labels may come from a supervised dataset or be fabricated with data augmentation

    • temperature term $\tau$: tunes how concentrated the feature space is
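A standard way to write this loss (not spelled out in the notes) averages, for each sample, the log-probability of picking a same-label neighbor under a softmax over distances:

```latex
\mathcal{L}_{\text{snn}} = -\frac{1}{B}\sum_{i=1}^{B}
  \log \frac{\sum_{j \neq i,\ y_j = y_i} \exp\!\big(-\|f(x_i) - f(x_j)\|^{2} / \tau\big)}
            {\sum_{k \neq i} \exp\!\big(-\|f(x_i) - f(x_k)\|^{2} / \tau\big)}
```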

Contrastive Learning: Feature Clustering

  • Find similar data samples by clustering them with learned features

  • core idea: use clustering algorithms to assign pseudo labels to samples so that we can run inter-sample contrastive learning

  • Examples:

    • Deep Cluster (Caron et al 2018)

    • Inter CLR (Xie et al 2021)

Contrastive Learning: Multiview Coding

  • Apply the InfoNCE objective to two or more different views of input data

  • Became a mainstream contrastive learning method

    • AMDIM (Bachman et al 2019)
    • Contrastive multiview coding (CMC, Tian et al 2019), etc.

Contrastive Learning between Modalities

  • Views can be from paired inputs from two or more modalities
    • CLIP (Radford et al 2021), ALIGN (Jia et al 2021): enable zero-shot classification, cross-modal retrieval, guided image generation

    • CodeSearchNet (Husain et al 2019): contrastive learning between text and code

Pretext tasks

Recap: Pretext Tasks

  • Step 1: Pre-train a model for a pretext task

  • Step 2: Transfer to applications

Pretext Tasks: Taxonomy

  • Generative
    • VAE
    • GAN
    • Autoregressive
    • Flow-based
    • Diffusion
  • Self-Prediction
    • Masked Prediction (Denoising AE, Context AE)
    • Channel Shuffling (colorization, split-brain)
  • Innate Relationship
    • Patch Positioning
    • Image Rotation
    • Feature Counting
    • Contrastive Predictive Coding
  • Contrastive
    • Instance Discrim

    • Augmented Views

    • Clustering-based

Image / Vision Pretext Tasks

Image Pretext Tasks: Variational Autoencoders
  • Auto-Encoding Variational Bayes (Kingma et al. 2014)

  • Image generation:

    • itself is an immensely broad field that deserves an entire tutorial or more
    • but can also serve as representation learning
Image Pretext Tasks: Generative Adversarial Networks
  • Jointly train an encoder, additional to the usual GAN

    • Bidirectional GAN

    • Adversarially Learned Inference

  • GAN Inversion: learning encoder post-hoc and/or optimizing for given image

Vision Pretext Tasks: Autoregressive Image Generation
  • Neural autoregressive density estimation (NADE)
  • Pixel RNN, Pixel CNN
    • Use RNN and CNN to predict values conditioned on the neighboring pixels
  • Image GPT
    • Uses a transformer on discretized pixels and was able to obtain better representations than some supervised approaches

Vision Pretext Tasks: Diffusion Model
  • Diffusion Modeling :
    • Follows a Markov chain of diffusion steps to slowly add random noise to data

    • and then learn to reverse the diffusion process to construct desired data samples from the noise

Vision Pretext Tasks: Masked Prediction
  • Denoising autoencoder (Vincent et al. 2008)

    • Add noise = Randomly mask some pixels

    • Only reconstruction loss

  • Context autoencoder (Pathak et al 2016)

    • Mask a random region in the image

    • Reconstruction loss + adversarial loss

    • adversarial loss: tries to make it difficult to distinguish the inpainting produced by the model from the actual image

Vision Pretext Tasks: Colorization and More

Prediction targets can be not only the pixel values themselves, but also any subset of information derived from the image

  • Image Colorization

    • Predict the binned CIE Lab color space given a grayscale image
  • Split-brain autoencoder

    • Predict a subset of color channels from the rest of channels
    • Channels: luminosity, color, depth, etc.

in order to get representations that transfer well to downstream tasks

Vision Pretext Tasks: Innate Relationship Prediction
  • Learn the relationship among image patches:
    • Predict relative positions between patches
    • Jigsaw Puzzle using patches

  • RotNet: predict which rotation is applied (Gidaris et al. 2018)
    • Rotation does not alter the semantic content of an image
  • Representation Learning by Learning to Count (Noroozi et al. 2017)
    • Counting features across patches without labels, using equivariance of counts
    • i.e., learns a function that counts visual primitives in images

Contrastive Predictive Coding and InfoNCE
  • Contrastive Predictive Coding (CPC) (van den Oord et al 2018)
    • Classify the “future” representation amongst a set of unrelated “negative” samples
    • an autoregressive context predictor is used to classify the correct future patches

  • minimizing the loss function is equivalent to maximizing a lower bound on the mutual information between the predicted context $c_t$ and the future patch $x_{t+k}$
    • i.e., the latent representation of the predicted data becomes as accurate as possible

CPC has been highly influential in contrastive learning

  • showing the effectiveness of casting the problem as an inter-sample classification task
Vision Pretext Tasks: Inter-Sample Classification
  • Exemplar-CNN
  • Instance-level discrimination
    • Each instance is a distinct class of its own

      🚩 # classes = # training samples

    • Non-parametric softmax that compares features

    • Memory bank for storing representations of past samples $V = \{v_i\}$

The model learns to scatter the feature vectors in the hypersphere while mapping visually similar images into closer regions

Vision Pretext Tasks: Contrastive Learning
  • Common approach:
    • Positive: make multiple views of one image and treat the image and its distorted versions as similar pairs
    • Negative: different images are treated as dissimilar

A natural question: are there better ways to create multi-view images? (see below)

Vision Pretext Tasks: Data Augmentation and Multiple Views
  • Augmented Multiscale Deep InfoMax
    • AMDIM, Bachman et al. 2019
    • Views from different augmentations
    • create multiple views from one input image
  • Contrastive Multiview Coding
    • CMC
    • Uses different channels or semantic segmentation labels of an image as different views of a single image
  • Pretext-Invariant Representation Learning
    • Jigsaw transformation
    • (as an input transform)
Vision Pretext Tasks: Inter-Sample Classification
MoCo
  • MoCo (Momentum Contrast; He et al. 2019)

    • Memory bank is a FIFO queue now
    • The target features are encoded using a momentum encoder ⇒ many more negative samples per batch at very little extra cost
    • shuffling BN: mitigates the negative effect of BN on self-supervised learning
  • MoCo v2:
    • MLP projection head
    • stronger data augmentation (adds blur)
    • Cosine learning rate schedule

  • MoCo v3:

    • Use Vision Transformer to replace ResNet
    • in-batch negatives

SimCLR
  • SimCLR (A Simple Framework for Contrastive Learning of Visual Representations)
    • Contrastive learning loss

    • f() – base encoder

    • g() – projection head layer

    • In-batch negative samples

      ◦ Use large batches to have a sufficient number of negative inputs

The two augmented branches are fully symmetric (a minimal loss sketch follows).
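A minimal sketch of the in-batch contrastive (NT-Xent) loss used by SimCLR, assuming `z1` and `z2` are the projection-head outputs g(f(x)) for two augmented views of the same batch of N images; the temperature value is an illustrative default:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """In-batch contrastive loss: for each view, the other augmented view of the
    same image is the positive; the remaining 2N-2 samples act as negatives."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit-norm features
    sim = z @ z.t() / temperature                        # cosine-similarity logits
    sim.fill_diagonal_(float("-inf"))                    # a sample is never its own negative
    # Row i (< N) is paired with row i + N, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Large batches matter because the denominator of the cross-entropy grows with the number of in-batch negatives.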

  • SimCLR v2
    • Larger ResNet models
    • Deeper g()
    • Memory bank

Barlow Twins
  • Barlow Twins (Zbontar et al. 2021)

    • Learn to make the cross-correlation matrix between the output features of two distorted versions of the same sample close to the identity (a minimal sketch follows)
    • Make it as diagonal as possible
    • because: if the individual features are efficiently encoded, they shouldn't encode information that is redundant between any pair of dimensions ⇒ their correlation should be zero
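A minimal sketch of this objective, assuming `z1` and `z2` are the embeddings of two distorted views of the same batch; the off-diagonal weight `lambd` is an illustrative assumption:

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Drive the cross-correlation matrix of the two views' features toward the
    identity: diagonal -> 1 (invariance), off-diagonal -> 0 (redundancy reduction)."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)        # standardize each feature dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.t() @ z2 / n                        # (d, d) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```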

Vision Pretext Tasks: Non-Contrastive Siamese Networks

Learn similarity representations for different augmented views of the same sample, but no contrastive component involving negative samples

  • the objective is just minimizing the L2 distance between features encoded from the same image

  • Bootstrap Your Own Latent (BYOL, Grill et al. 2020)

    • Momentum-encoded features as the target
  • SimSiam (Chen & He 2020)

    • No momentum encoder
    • Large batch size unnecessary
  • BatchNorm seems to be playing an important role

    • might implicitly provide a contrastive learning signal (a minimal sketch of the objective follows)
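A minimal sketch of this kind of non-contrastive objective in the SimSiam style, assuming a hypothetical encoder `f` and predictor head `h`; for normalized features, maximizing cosine similarity is equivalent (up to constants) to minimizing the L2 distance mentioned above, and the stop-gradient plus the predictor are what empirically prevent collapse:

```python
import torch.nn.functional as F

def simsiam_loss(f, h, x1, x2):
    """Symmetrized negative cosine similarity between the predictor output of one
    view and the stop-gradient (detached) embedding of the other view; no negatives."""
    z1, z2 = f(x1), f(x2)      # encoder outputs for two augmented views
    p1, p2 = h(z1), h(z2)      # predictor outputs
    loss = -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean() +
             F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2
    return loss
```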

Vision Pretext Tasks: Feature Clustering with K-Means

another major technology for self-supervised learning:

  • to learn from clusters of features
  • DeepCluster (Caron et al. 2018)
    • Iteratively clusters features via k-means
    • then uses the cluster assignments as pseudo labels to provide supervised signals
  • Online DeepCluster (Zhan et al. 2020)
    • Performs clustering and network updates simultaneously rather than alternately

  • Prototypical Cluster Learning (PCL, Li et al. 2020)
    • Online EM for clustering
    • combined with InfoNCE for smoothness
Vision Pretext Tasks: Feature Clustering with Sinkhorn-Knopp

Sinkhorn-Knopp: a clustering algorithm based on optimal transport (OT)

  • SeLa (Self-Labelling, Asano et al. 2020)
  • SwAV (Swapping Assignments between multiple Views; Caron et al. 2020)
    • Implicit clustering via learned prototype codes (“anchor clusters”)
    • Predict the cluster assignment of one view from the other view (swapped prediction)

Vision Pretext Tasks: Feature Clustering to improve SSL

In this approach, novel ideas based on clustering are designed to be used in conjunction with other SSL methods

  • InterCLR (Xie et al. 2020)
    • Inter-sample contrastive pairs are constructed according to pseudo labels obtained by clustering
    • i.e., positive samples for contrastive learning can also come from different images (rather than only from multiple views), using pseudo labels from an online k-means clustering
  • Divide and Contrast (Tian et al. 2021)
    • Train expert models on the clustered datasets and then distill the experts into a single model

    • to improve the performance of other self-supervised learning models

Vision Pretext Tasks: Nearest-Neighbor
  • NNCLR (Dwibedi et al. 2021)
    • Contrast with the nearest neighbors in the embedding space

      ◦ to serve as the positive and negative in contrastive learning

    • Allows for lighter data augmentation for views

Vision Pretext Tasks: Combining with Supervised Loss
  • Combine supervised loss + self-supervised learning
    • Self-supervised semi-supervised learning (S4L, Zhai et al 2019)
    • Unsupervised data augmentation (UDA, Xie et al 2019)
  • Use known labels for contrastive learning
    • Supervised Contrastive Loss (SupCon; Khosla et al. 2021)

      ◦ less sensitive to hyperparameter choices

Video Pretext Tasks

Video Pretext Tasks: Innate Relationship Prediction
  • Most image pretext tasks can be applied to videos
  • However, with an additional time dimension, much more information about the video shot configuration or the physical world can be extracted from videos
    • Predicting object movements
    • 3D motion of camera
Video Pretext Tasks: Optical Flow

Tracking object movement over time

  • Tracking movement of image patches (Wang & Gupta, 2016)

  • Segmentation based on motion (Pathak et al. 2017)
Video Pretext Tasks: Sequence Ordering
  • Temporal order Verification

    • Misra et al. 2016

    • Fernando et al. 2017

    • determine whether the temporal order is correct

  • Predict the arrow of time, forward or backward

    • Wei et al. 2018
    • classify whether the sequence is moving forward or backward in time
    • outperforms the temporal order verification model
Video Pretext Tasks: Colorization
  • Tracking emerges by colorizing videos (Vondrick et al. 2018)

    • Copy colors from a reference frame to another target frame in grayscale

    • by leveraging the natural temporal coherence of colors across video frames

  • Tracking emerges by colorizing videos (Vondrick et al. 2018)

    • Used for video segmentation or human pose estimation without fine-tuning

      ◦ because the model can move the colored markings from the labeled input frame directly into the prediction

Video Pretext Tasks: Contrastive Multi-View Learning
  • TCN (Sermanet et al. 2017)

    • Use triplet loss

    • Different viewpoints at the same timestep of the same scene should share the same embedding, while embeddings should vary over time, even for the same camera viewpoint

  • Multi-frame TCN (Dwibedi et al. 2019)

    • Use n-pairs loss
    • Multiple frames are aggregated into the embedding
Video Pretext Task: Autoregressive Generation

Because video files are huge, generating coherent, continuous video has been a difficult task

  • Predicting videos with VQ-VAE (Walker et al. 2021)

    • first: learn to discretize the video into latent codes using a VQ-VAE

    • then: learn to autoregressively generate the frames using PixelCNN or Transformers

    • Combining VQ-VAE with autoregressive models to generate high-dimensional data ⇒ a very powerful class of generative models

  • VideoGPT: Video generation using VQ-VAE and Transformers (Yan et al. 2021)

  • Jukebox (Dhariwal et al. 2020)

    • learns 3 different levels of VQ-VAE using 3 different compression ratios
    • resulting in 3 sequences of discrete codes
    • then use them to generate new music

  • CALM (Castellon et al. 2021)
    • Jukebox representation for MIR tasks
  • TagBox (Manilow et al. 2021)
    • Source separation by steering Jukebox's latent space

Audio Pretext Tasks

Audio Pretext Tasks: Contrastive Learning
  • COLA (Saeed et al. 2021)
    • Assigns high similarity between audio clip extracted from the same recording and low similarity to clips from different recordings
    • predicts whether a pair of encoded features comes from the same recording or not
  • Multi-Format audio contrastive learning
    • assigns high similarity between the raw audio format and the corresponding spectral representation

    • maximizing agreement between features encoded from the raw waveform and the spectrogram formats

Audio Pretext Task: Masked Language Modeling for ASR

ASR: Automatic speech recognition

  • Wav2Vec 2.0 (Baevski et al. 2020)

    • applies contrastive learning to the representations of masked portions of the audio

      ◦ to learn discrete tokens from them

    • speech recognition models trained on these tokens show better performance compared to those trained on conventional audio features / raw audio

  • HuBERT (Hsu et al. 2021, FAIR)

    • learned by alternating between an offline clustering step and optimizing for cluster assignment prediction (similar to DeepCluster)
  • Also employed by SpeechStew (Chan et al. 2021), Big SSL (Zhang et al. 2021)

Multimodal Pretext Tasks

Self-supervised learning can also be applied to multimodal data, although its definition gets somewhat blurry here, depending on whether you consider a multimodal dataset as a single unlabeled dataset or as one modality providing supervision for another

  • MIL-NCE (Miech et al. 2020)

    • Find matching narration with video

    • trained contrastively to find the matching narration for a video; this can be used not only for correcting misalignment in videos but also for action recognition, text-to-video retrieval, action localization and action segmentation

  • CLIP (Radford et al. 2021), ALIGN (Jia et al. 2021)

    • Contrast text and image embeddings from paired data

Language Pretext Tasks

Language Pretext Tasks: Generative Language Modeling
  • Pretrained language models:

    • They all rely on unsupervised text and try to predict part of the text from its context
    • depend only on the natural order of words and sequences
  • Some examples that changed the landscape of NLP research quite a lot:

    • GPT

      ◦ Autoregressive;

      ◦ predict the next token based on the previous tokens

    • BERT

      ◦ a bi-directional transformer model

      ◦ Masked language modeling (MLM)

      ◦ Next sentence prediction (NSP) ⇒ a binary classifier telling whether one sentence is the next sentence of the other

    • ALBERT

      ◦ Sentence order prediction (SOP) ⇒ positive sample: a pair of two consecutive segments from the same document; negative sample: same as above but with the segment order switched

    • ELECTRA

      ◦ Replaced token detection (RTD) ⇒ random tokens are replaced and considered corrupted; in parallel, a binary discriminator is trained together with the generative model to predict whether each token has been replaced

Language Pretext Tasks: Sentence Embedding
  • Skip-thought vectors (Kiros et al. 2015)

    • Predict sentences based on other sentences around
  • Quick-thought vectors (Logeswaran & Lee, 2018)

    • Identify the correct context sentence among other contrastive sentences

  • IS-BERT (“Info-Sentence BERT”; Zhang et al. 2020)

    • mutual information maximization
  • SimCSE (“Simple Contrastive learning of Sentence Embeddings”; Gao et al. 2021)

    • Predict a sentence from itself with only dropout noise
    • One sentence gets two different versions of dropout augmentations

  • Most models for learning sentence embeddings rely on supervised NLI (Natural Language Inference) datasets, such as SBERT (Reimers & Gurevych 2019), BERT-flow
  • Unsupervised sentence embedding models (e.g., unsupervised SimCSE) still have a performance gap compared with the supervised versions (e.g., supervised SimCSE)

Training Techniques

  • Data augmentation
  • In-batch negative samples
  • Hard negative mining
  • Memory bank
  • Large batchsize

contrastive learning can provide good results in terms of transfer performance

Techniques: Data augmentation

  • The data augmentation setup is critical for learning good embeddings

    • and generalizable embedding features
  • Approach:

    • Introduce non-essential variations into examples without modifying their semantic meanings
    • ⇒ thus encourage the model to learn the essential part of the representation

image augmentation; text augmentation

Techniques: Data augmentation – Image Augmentation
  • Basic Image Augmentation:

    • Random crop
    • color distortion
    • Gaussian blur
    • color jittering
    • random flip / rotation
    • etc.
  • Augmentation Strategies

    • AutoAugment (Cubuk, et al. 2018): Inspired by NAS
    • RandAugment (Cubuk et al. 2019): reduces NAS search space in AutoAugment
    • PBA (Population based augmentation; Ho et al. 2019): evolutionary algorithms
    • UDA (Unsupervised Data Augmentation, Xie et al. 2019): select augmentation strategies to minimize the KL divergence between the predicted distribution over an unlabelled example and its augmented version
  • Image mixture

    • Mixup (Zhang et al. 2018): weighted pixel-wise combination of two images

      ◦ to create new samples based on existing ones (a minimal sketch follows this list)

    • Cutmix (Yun et al 2019): mix in a local region of one image into the other

    • MoCHi (Mixing of Contrastive Hard Negatives): mixture of hard negative samples

      ◦ explicitly maintains a queue of negative samples sorted by similarity to the query in descending order ⇒ the first few samples in the queue are the hardest negatives ⇒ new hard negatives can then be created by mixing samples in this queue together, or even with the query
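As a concrete example of the image-mixture idea, here is a minimal Mixup-style sketch (the Beta distribution parameter `alpha` is an illustrative value):

```python
import torch

def mixup(images, alpha=0.2):
    """Create new samples as a weighted pixel-wise combination of random image pairs."""
    lam = torch.distributions.Beta(alpha, alpha).sample()   # mixing coefficient in (0, 1)
    perm = torch.randperm(images.size(0))                    # random pairing within the batch
    mixed = lam * images + (1 - lam) * images[perm]
    return mixed, perm, lam                                   # perm/lam let the caller mix labels too
```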

Techniques: Data augmentation – Text Augmentation
  • Lexical Edits

    • (just changing the words or tokens)

    • EDA (Easy Data Augmentation; Wei & Zhou 2019): Synonym replacement, random insertion / swap / deletion

    • Contextual Augmentation (Kobayashi 2018): word substitution by BERT prediction

      ◦ tries to find replacement words using a bi-directional language model

  • Back-translation (Sennrich et al. 2015)

    • augments a sentence by first translating it to another language and then translating it back to the original language

      ◦ relies on the translation model ⇒ the meaning should stay largely unchanged

    • CERT (Fang et al. 2020) generates augmented sentences via back-translation

  • Dropout and Cutoff

    • SimCSE uses dropout (Gao et al. 2021)

      ◦ dropout: a universal way to apply a transformation to any input

      ◦ SimCSE: uses dropout to create different copies of the same text ⇒ universal because it does not need expert knowledge about the attributes of the input modality (the change is at the architecture level)

    • Cutoff augmentation for text (Shen et al. 2020)

      ◦ masking randomly selected tokens, feature columns, or spans

Hard Negative Mining

What is “hard negative mining”
  • Hard negative samples are difficult to learn
    • They should have different labels from the anchor samples
    • But the embedding features may be very close
  • Hard negative mining is important for contrastive learning
  • Challenging negative samples encourages the model to learn better representations that can distinguish hard negatives from true positives
Explicit hard negative mining
  • Extract task-specific hard negative samples from labelled datasets
    • e.g., “contradiction” sentence pairs from NLI datasets.
    • (Most sentence embedding papers)
  • Keyword based retrieval
    • can be found by classic information retrieval models (such as BM25)
  • Upweight the negative sample probability to be proportional to its similarity to the anchor sample
  • MoCHi: mine hard negative by sorting them according to similarity to the query in descending order
Implicit hard negative mining
  • In-batch negative samples
  • Memory bank (Wu et al. 2018, He et al. 2019)
    • Increase batch size
  • Large batch size via various training parallelism

Needs a large batch size

Theories

Why does contrastive learning work?

Contrastive learning captures shared information between views

  • InfoNCE (van den Oord et al. 2018)

    • is a lower bound to MI (Mutual information) between views:

  • Minimizing InfoNCE leads to maximizing the MI between view 1 and view 2 (the bound is written out after this list)

    • Thus, by minimizing the InfoNCE loss ⇒ the encoder optimizes the embedding space to retain as much of the information shared between the two views as possible
    • The InfoMax principle in contrastive learning

  • Q: How can we design good views?

    • augmentations are crucial for the performance
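The bound referenced above (standard form from the CPC paper, with N the number of samples used in the loss, i.e. one positive plus N-1 negatives):

```latex
I(v_1; v_2) \;\geq\; \log N - \mathcal{L}_{\text{InfoNCE}}
```

So driving the InfoNCE loss down pushes this lower bound on the mutual information up.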

The InfoMin Principle

  • Optimal views are at the sweet spot where they encode only the information that is useful for transfer
    • Minimal sufficient encoder depends on downstream tasks (Tian et al. 2020)

    • Composite loss for finding the sweet spot (Tsai et al. 2020)

      ◦ helps converge to a minimal sufficient encoder

To perform well in transfer learning ⇒ we want our model to capture the mutual information between the data x and the downstream label y, $I(x; y)$

  • if the mutual information between the views, $I(v_1; v_2)$, is smaller than $I(x; y)$ ⇒ the model fails to capture useful information for the downstream tasks
  • meanwhile, if the mutual information between the views is too large ⇒ the views carry excess information that is unrelated to the downstream tasks ⇒ the transfer performance decreases due to the noise
  • ⇒ there is a sweet spot ⇒ the minimal sufficient encoder
  • This shows:
    • The optimal views are dependent on the downstream tasks

Alignment and Uniformity on the Hypersphere

  • Contrastively learned features are more uniform and aligned

    • Uniformity: features should be distributed uniformly on the hypersphere $S^d$
    • Alignment: features from two views of the same input should be the same

  • compared with a randomly initialized network or a network trained with supervised learning
  • alignment is measured by how close the features from two views of the same input are (the corresponding losses are written out below)
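These two properties can be written as explicit losses (standard form from Wang & Isola 2020, not spelled out in the notes), with $p_{\text{pos}}$ the distribution of positive pairs and constants $\alpha, t > 0$:

```latex
\mathcal{L}_{\text{align}} = \mathbb{E}_{(x, y) \sim p_{\text{pos}}}\,\|f(x) - f(y)\|_2^{\alpha}
\qquad
\mathcal{L}_{\text{uniform}} = \log\, \mathbb{E}_{x, y \,\sim\, p_{\text{data}}}\!\left[e^{-t\,\|f(x) - f(y)\|_2^{2}}\right]
```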

Dimensional Collapse

  • Contrastive methods sometimes suffer from dimensional collapse (Hua et al. 2021)
    • Learned features span a lower-dimensional subspace instead of using the full dimensionality of the embedding space
  • Two causes demonstrated by Jing et al. (2021)
    • 1. strong augmentation while creating the views
    • 2. implicit regularization caused by the gradient descent dynamics

Provable Guarantees for Contrastive Learning

  • Sampling complexity decreases when:
    • Adopting contrastive learning objectives (Arora et al. 2019)
    • Predicting the known distribution in the data (Lee et al. 2020)
  • Linear classifier on learned representation is nearly optimal (Tosh et al. 2021)
  • Spectral Contrastive Learning (HaoChen et al. 2021)
    • based on a spectral decomposition of the augmentation graph

In short, the theory of contrastive learning has played a significant role, but there is still a long way to go

Future Directions

briefly discuss a few open research questions and areas of work to look into

Future Directions

  • Large batch size ⇒ improved transfer performance

  • High-quality large data corpus ⇒ better performance

    • Learning from synthetic or Web data
    • Measuring dataset quality and filtering / active learning ⇒ better control over data quality
  • Efficient negative sample selection

    • to do hard negative mining
    • (a large batch size is not enough, because the batch size cannot grow to infinity)
  • Combine multiple pretext tasks

    • How to combine
    • Best strategies

  • Data augmentation tricks have critical impacts but are still quite ad-hoc

    • Modality-dependent: most augmentation methods only apply to a single modality ⇒ most of them are handcrafted by humans

    • Theoretical foundations

      ◦ e.g., on why certain augmentations work better than others

      ◦ to guide us toward more efficient data augmentation

  • Improving training efficiency

    • Self-supervised learning methods are pushing the deep learning arms race

      ◦ increases in model size and training batch size

      ◦ ⇒ lead to increased costs, both economic and environmental

    • Direct impacts on the economical and environmental costs

  • Social biases in the embedding space

    • Early work in debiasing word embedding
    • Biases in Dataset
