Assignment | 05-week2 -Part_2-Emojify!


This series only adds my personal study notes to the original course assignments. If there are any errors, corrections are welcome. - ZJ

Coursera course | deeplearning.ai | NetEase Cloud Classroom (网易云课堂)

CSDN:http://blog.csdn.net/JUNJUN_ZHAO/article/details/79470246


Welcome to the second assignment of Week 2. You are going to use word vector representations to build an Emojifier.

Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. So rather than writing "Congratulations on the promotion! Let's get coffee and talk. Love you!" the emojifier can automatically turn this into "Congratulations on the promotion! 👍 Let's get coffee and talk. ☕️ Love you! ❤️"

You will implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. But using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate words in the test set to the same emoji, even if those words don't appear in the training set. This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set.

In this exercise, you’ll start with a baseline model (Emojifier-V1) using word embeddings, then build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM.

Let's get started! Run the following cell to load the packages you are going to use.

import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt

%matplotlib inline

''' emo_utils.py '''
import csv
import numpy as np
import emoji
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

def read_glove_vecs(glove_file):
    with open(glove_file, 'r', encoding='utf-8') as f:
        words = set()
        word_to_vec_map = {}
        for line in f:
            line = line.strip().split()
            curr_word = line[0]
            words.add(curr_word)
            word_to_vec_map[curr_word] = np.array(line[1:], dtype=np.float64)

        i = 1
        words_to_index = {}
        index_to_words = {}
        for w in sorted(words):
            words_to_index[w] = i
            index_to_words[i] = w
            i = i + 1
    return words_to_index, index_to_words, word_to_vec_map

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

def read_csv(filename = 'data/emojify_data.csv'):
    phrase = []
    emoji = []

    with open(filename) as csvDataFile:
        csvReader = csv.reader(csvDataFile)
        for row in csvReader:
            phrase.append(row[0])
            emoji.append(row[1])

    X = np.asarray(phrase)
    Y = np.asarray(emoji, dtype=int)
    return X, Y

def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)]
    return Y

emoji_dictionary = {"0": "\u2764\uFE0F",    # :heart: prints a black instead of red heart depending on the font
                    "1": ":baseball:",
                    "2": ":smile:",
                    "3": ":disappointed:",
                    "4": ":fork_and_knife:"}

def label_to_emoji(label):
    """Converts a label (int or string) into the corresponding emoji code (string) ready to be printed"""
    return emoji.emojize(emoji_dictionary[str(label)], use_aliases=True)

def print_predictions(X, pred):
    print()
    for i in range(X.shape[0]):
        print(X[i], label_to_emoji(int(pred[i])))

def plot_confusion_matrix(y_actu, y_pred, title='Confusion matrix', cmap=plt.cm.gray_r):
    df_confusion = pd.crosstab(y_actu, y_pred.reshape(y_pred.shape[0],), rownames=['Actual'], colnames=['Predicted'], margins=True)
    df_conf_norm = df_confusion / df_confusion.sum(axis=1)
    plt.matshow(df_confusion, cmap=cmap)   # imshow
    # plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(df_confusion.columns))
    plt.xticks(tick_marks, df_confusion.columns, rotation=45)
    plt.yticks(tick_marks, df_confusion.index)
    # plt.tight_layout()
    plt.ylabel(df_confusion.index.name)
    plt.xlabel(df_confusion.columns.name)

def predict(X, Y, W, b, word_to_vec_map):
    """
    Given X (sentences) and Y (emoji indices), predict emojis and compute the accuracy of your model over the given set.

    Arguments:
    X -- input data containing sentences, numpy array of shape (m, None)
    Y -- labels, containing index of the label emoji, numpy array of shape (m, 1)

    Returns:
    pred -- numpy array of shape (m, 1) with your predictions
    """
    m = X.shape[0]
    pred = np.zeros((m, 1))

    for j in range(m):   # Loop over training examples
        # Split jth test example (sentence) into list of lower case words
        words = X[j].lower().split()

        # Average words' vectors
        avg = np.zeros((50,))
        for w in words:
            avg += word_to_vec_map[w]
        avg = avg / len(words)

        # Forward propagation
        Z = np.dot(W, avg) + b
        A = softmax(Z)
        pred[j] = np.argmax(A)

    print("Accuracy: " + str(np.mean((pred[:] == Y.reshape(Y.shape[0], 1)[:]))))
    return pred

1 - Baseline model: Emojifier-V1

1.1 - Dataset EMOJISET

Let’s start by building a simple baseline classifier.

You have a tiny dataset (X, Y) where:
- X contains 127 sentences (strings)
- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence


Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here.

Let’s load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples).

X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())

Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red.

index = 7
print(X_train[index], label_to_emoji(Y_train[index]))

congratulations on your acceptance ?

1.2 - Overview of the Emojifier-V1

In this part, you are going to implement a baseline model called “Emojifier-v1”.


Figure 2: Baseline model (Emojifier-V1).

The input of the model is a string corresponding to a sentence (e.g. "I love you"). In the code, the output will be a probability vector of shape (1,5), which you then pass through an argmax layer to extract the index of the most likely emoji output.

To get our labels into a format suitable for training a softmax classifier, let's convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$, where each row is a one-hot vector giving the label of one example. You can do so using the next code snippet. Here, Y_oh stands for "Y-one-hot" in the variable names Y_oh_train and Y_oh_test:

Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)

Let’s see what convert_to_one_hot() did. Feel free to change index to print out different values.

index = 50
print(Y_train[index], "is converted into one hot", Y_oh_train[index])

0 is converted into one hot [1. 0. 0. 0. 0.]

All the data is now ready to be fed into the Emojify-V1 model. Let’s implement the model!

1.3 - Implementing Emojifier-V1

As shown in Figure (2), the first step is to convert an input sentence into its word vector representations, which then get averaged together. Similar to the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations.


word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')

You’ve loaded:
- word_to_index: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000)
- index_to_word: dictionary mapping from indices to their corresponding words in the vocabulary
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.

Run the following cell to check if it works.

word = "cucumber" index = 289846 print("the index of", word, "in the vocabulary is", word_to_index[word]) print("the", str(index) + "th word in the vocabulary is", index_to_word[index]) the index of cucumber in the vocabulary is 113317 the 289846th word in the vocabulary is potatos

Exercise: Implement sentence_to_avg(). You will need to carry out two steps:
1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful.
2. For each word in the sentence, access its GloVe representation. Then, average all these values.

# GRADED FUNCTION: sentence_to_avg

def sentence_to_avg(sentence, word_to_vec_map):
    """
    Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
    and averages its value into a single vector encoding the meaning of the sentence.

    Arguments:
    sentence -- string, one training example from X
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation

    Returns:
    avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
    """

    ### START CODE HERE ###
    # Step 1: Split sentence into list of lower case words (≈ 1 line)
    words = sentence.lower().split()

    # Initialize the average word vector, should have the same shape as your word vectors.
    avg = np.zeros((50,))

    # Step 2: average the word vectors. You can loop over the words in the list "words".
    for w in words:
        avg += word_to_vec_map[w]
    avg = avg / len(words)
    ### END CODE HERE ###

    return avg

avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg)

avg =  [-0.008005    0.56370833 -0.50427333  0.258865    0.55131103  0.03104983
 -0.21013718  0.16893933 -0.09590267  0.141784   -0.15708967  0.18525867
  0.6495785   0.38371117  0.21102167  0.11301667  0.02613967  0.26037767
  0.05820667 -0.01578167 -0.12078833 -0.02471267  0.4128455   0.5152061
  0.38756167 -0.898661   -0.535145    0.33501167  0.68806933 -0.2156265
  1.797155    0.10476933 -0.36775333  0.750785    0.10282583  0.348925
 -0.27262833  0.66768    -0.10706167 -0.283635    0.59580117  0.28747333
 -0.3366635   0.23393817  0.34349183  0.178405    0.1166155  -0.076433
  0.1445417   0.09808667]

Expected Output:

**avg= ** [-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983 -0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867 0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767 0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061 0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265 1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925 -0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333 -0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433 0.1445417 0.09808667]

Model

You now have all the pieces to finish implementing the model() function. After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax’s parameters.

Exercise: Implement the model() function described in Figure (2). Assuming here that $Y_{oh}$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are:

$$z^{(i)} = W \cdot avg^{(i)} + b$$
$$a^{(i)} = softmax(z^{(i)})$$
$$\mathcal{L}^{(i)} = -\sum_{k=0}^{n_y - 1} Y_{oh,k}^{(i)} \log(a_k^{(i)})$$
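For the backward pass used in the code below, the gradients of this softmax cross-entropy loss take the standard form (a brief derivation note added here for reference; the assignment itself only states the forward equations):

$$dz^{(i)} = a^{(i)} - Y_{oh}^{(i)}, \qquad dW = dz^{(i)} \, (avg^{(i)})^{\top}, \qquad db = dz^{(i)}$$

These correspond exactly to the dz, dW, and db lines inside model().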

It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the $avg^{(i)}$ representation anyway, let's not bother this time.
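For reference only, here is a minimal sketch of what such a vectorized forward pass could look like; the helper name forward_all and the stacked matrix Avg (all sentence averages, shape (m, 50)) are assumptions for illustration, not part of the assignment:

import numpy as np

def forward_all(Avg, W, b):
    # Score every example at once: Avg is (m, n_h), W is (n_y, n_h), b is (n_y,).
    Z = Avg @ W.T + b                        # (m, n_y) logits, one row per example
    Z = Z - Z.max(axis=1, keepdims=True)     # subtract row max for numerical stability
    A = np.exp(Z)
    return A / A.sum(axis=1, keepdims=True)  # row-wise softmax probabilities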

We provided you a function softmax().

# GRADED FUNCTION: model

def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
    """
    Model to train word vector representations in numpy.

    Arguments:
    X -- input data, numpy array of sentences as strings, of shape (m, 1)
    Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    learning_rate -- learning_rate for the stochastic gradient descent algorithm
    num_iterations -- number of iterations

    Returns:
    pred -- vector of predictions, numpy-array of shape (m, 1)
    W -- weight matrix of the softmax layer, of shape (n_y, n_h)
    b -- bias of the softmax layer, of shape (n_y,)
    """

    np.random.seed(1)

    # Define number of training examples
    m = Y.shape[0]   # number of training examples
    n_y = 5          # number of classes
    n_h = 50         # dimensions of the GloVe vectors

    # Initialize parameters using Xavier initialization
    W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
    b = np.zeros((n_y,))

    # Convert Y to Y_onehot with n_y classes
    Y_oh = convert_to_one_hot(Y, C = n_y)

    # Optimization loop
    for t in range(num_iterations):   # Loop over the number of iterations
        for i in range(m):            # Loop over the training examples

            ### START CODE HERE ### (≈ 4 lines of code)
            # Average the word vectors of the words from the i'th training example in X
            avg = sentence_to_avg(X[i], word_to_vec_map)

            # Forward propagate the avg through the softmax layer
            z = np.dot(W, avg) + b
            a = softmax(z)

            # Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
            cost = -np.sum(Y_oh[i] * np.log(a))
            ### END CODE HERE ###

            # Compute gradients
            dz = a - Y_oh[i]
            dW = np.dot(dz.reshape(n_y, 1), avg.reshape(1, n_h))
            db = dz

            # Update parameters with Stochastic Gradient Descent
            W = W - learning_rate * dW
            b = b - learning_rate * db

        if t % 100 == 0:
            print("Epoch: " + str(t) + " --- cost = " + str(cost))
            pred = predict(X, Y, W, b, word_to_vec_map)

    return pred, W, b

print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5, 0, 0, 5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)

X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
                'Lets go party and drinks', 'Congrats on the new job', 'Congratulations',
                'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
                'You totally deserve this prize', 'Let us go play football',
                'Are you down for football this afternoon', 'Work hard play harder',
                'It is suprising how people can be dumb sometimes', 'I am very disappointed',
                'It is the best day in my life', 'I think I will end up alone',
                'My life is so boring', 'Good job', 'Great so awesome'])

print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))

(132,)
(132,)
(132, 5)
never talk to me again
<class 'numpy.ndarray'>
(20,)
(20,)
(132, 5)
<class 'numpy.ndarray'>

Run the next cell to train your model and learn the softmax parameters (W,b).

pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)

Epoch: 0 --- cost = 1.952049881281007
Accuracy: 0.3484848484848485
Epoch: 100 --- cost = 0.07971818726014807
Accuracy: 0.9318181818181818
Epoch: 200 --- cost = 0.04456369243681402
Accuracy: 0.9545454545454546
Epoch: 300 --- cost = 0.03432267378786059
Accuracy: 0.9696969696969697
[[3.] [2.] [3.] [0.] [4.] [0.] [3.] [2.] [3.] [1.] [3.] [3.] [1.] [3.] [2.] [3.]
 [2.] [3.] [1.] [2.] [3.] [0.] [2.] [2.] [2.] [1.] [4.] [3.] [3.] [4.] [0.] [3.]
 [4.] [2.] [0.] [3.] [2.] [2.] [3.] [4.] [2.] [2.] [0.] [2.] [3.] [0.] [3.] [2.]
 [4.] [3.] [0.] [3.] [3.] [3.] [4.] [2.] [1.] [1.] [1.] [2.] [3.] [1.] [0.] [0.]
 [0.] [3.] [4.] [4.] [2.] [2.] [1.] [2.] [0.] [3.] [2.] [2.] [0.] [3.] [3.] [1.]
 [2.] [1.] [2.] [2.] [4.] [3.] [3.] [2.] [4.] [0.] [0.] [3.] [3.] [3.] [3.] [2.]
 [0.] [1.] [2.] [3.] [0.] [2.] [2.] [2.] [3.] [2.] [2.] [2.] [4.] [1.] [1.] [3.]
 [3.] [4.] [1.] [2.] [1.] [1.] [3.] [1.] [0.] [4.] [0.] [3.] [3.] [4.] [4.] [1.]
 [4.] [3.] [0.] [2.]]

Expected Output (on a subset of iterations):

**Epoch: 0** cost = 1.95204988128 Accuracy: 0.348484848485
**Epoch: 100** cost = 0.0797181872601 Accuracy: 0.931818181818
**Epoch: 200** cost = 0.0445636924368 Accuracy: 0.954545454545
**Epoch: 300** cost = 0.0343226737879 Accuracy: 0.969696969697

Great! Your model has pretty high accuracy on the training set. Let's now see how it does on the test set.

1.4 - Examining test set performance

print("Training set:") pred_train = predict(X_train, Y_train, W, b, word_to_vec_map) print('Test set:') pred_test = predict(X_test, Y_test, W, b, word_to_vec_map) Training set: Accuracy: 0.9772727272727273 Test set: Accuracy: 0.8571428571428571

Expected Output:

**Train set accuracy** 97.7
**Test set accuracy** 85.7

Random guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples.

In the training set, the algorithm saw the sentence "I love you" with the label ❤️. You can check, however, that the word "adore" does not appear in the training set. Nonetheless, let's see what happens if you write "I adore you."

X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4], [3]])

pred = predict(X_my_sentences, Y_my_labels, W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)

Accuracy: 0.8333333333333334

i adore you ❤️
i love you ❤️
funny lol 😄
lets play with a ball ⚾
food is ready 🍴
not feeling happy 😄

Amazing! Because adore has an embedding similar to love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved, or adore have embedding vectors similar to love, and so might work too. Feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
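As a quick sanity check (not part of the graded code), you can measure this similarity directly on the loaded GloVe vectors; a minimal sketch assuming word_to_vec_map is already in scope:

import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two word vectors; values near 1 mean similar usage.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# "adore" never appears in the training set, but its GloVe vector lies close to "love",
# which is why the averaged-embedding classifier generalizes to it.
print(cosine_similarity(word_to_vec_map["love"], word_to_vec_map["adore"]))
print(cosine_similarity(word_to_vec_map["love"], word_to_vec_map["table"]))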

Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so it is not good at understanding phrases like "not happy."
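To see why, note that averaging word vectors is permutation-invariant: any reordering of a sentence produces exactly the same input to the classifier. A small illustration using sentence_to_avg from above:

# These two word orders yield identical average vectors, hence identical predictions.
avg1 = sentence_to_avg("not feeling happy", word_to_vec_map)
avg2 = sentence_to_avg("happy not feeling", word_to_vec_map)
print(np.allclose(avg1, avg2))   # True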

Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class (“actual” class) is mislabeled by the algorithm with a different class (“predicted” class).

print(Y_test.shape)
print('           ' + label_to_emoji(0) + '    ' + label_to_emoji(1) + '    ' + label_to_emoji(2) + '    ' + label_to_emoji(3) + '    ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)

(56,)
           ❤️    ⚾    😄    😞    🍴
Predicted  0.0  1.0  2.0  3.0  4.0  All
Actual
0            6    0    0    1    0    7
1            0    8    0    0    0    8
2            2    0   16    0    0   18
3            1    1    2   12    0   16
4            0    0    1    0    6    7
All          9    9   19   13    6   56


What you should remember from this part:
- Even with just 127 training examples, you can get a reasonably good model for Emojifying. This is due to the generalization power word vectors give you.
- Emojify-V1 will perform poorly on sentences such as "This movie is not good and not enjoyable" because it doesn't understand combinations of words; it just averages all the words' embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part.


2 - Emojifier-V2: Using LSTMs in Keras

Let’s build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji.


Run the following cell to load the Keras packages.

import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1)

2.1 - Overview of the model

Here is the Emojifier-v2 you will implement:



Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier.

2.2 - Keras and mini-batching

In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it’s just not possible to do them both at the same time.

The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, if the maximum sequence length is 20, we could pad every sentence with "0"s so that each input sentence is of length 20. Thus, the sentence "i love you" would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentence longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.
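As an aside (the graded sentences_to_indices() below implements padding by writing word indices into a matrix of zeros), Keras also ships a helper for this step; a minimal sketch using pad_sequences from the keras.preprocessing.sequence module imported above:

from keras.preprocessing.sequence import pad_sequences

# Three index sequences of unequal length, zero-padded (and truncated) at the end to length 5.
seqs = [[155345, 225122], [220930, 286375, 69714], [151204, 192973, 302254, 151349, 394475]]
padded = pad_sequences(seqs, maxlen=5, padding='post', truncating='post')
print(padded.shape)   # (3, 5)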


2.3 - The Embedding layer

In Keras, the embedding matrix is represented as a "layer", and maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras and initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train this layer or leave it fixed.

The Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below.



Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional.

The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors).
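To make these shapes concrete, here is a toy check (sizes are made up for illustration and are independent of the assignment's data):

import numpy as np
from keras.models import Sequential
from keras.layers.embeddings import Embedding

# Toy embedding layer: vocabulary of 100 words, 8-dimensional vectors, inputs of length 5.
toy = Sequential([Embedding(input_dim=100, output_dim=8, input_length=5)])
out = toy.predict(np.array([[1, 2, 3, 0, 0], [4, 5, 0, 0, 0]]))
print(out.shape)   # (2, 5, 8): (batch size, max input length, word vector dimension)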

The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence.


Exercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to Embedding() (described in Figure 4).


# GRADED FUNCTION: sentences_to_indices

def sentences_to_indices(X, word_to_index, max_len):
    """
    Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
    The output shape should be such that it can be given to `Embedding()` (described in Figure 4).

    Arguments:
    X -- array of sentences (strings), of shape (m, 1)
    word_to_index -- a dictionary mapping each word to its index
    max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.

    Returns:
    X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
    """

    m = X.shape[0]   # number of training examples

    ### START CODE HERE ###
    # Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
    X_indices = np.zeros((m, max_len))

    for i in range(m):   # loop over training examples

        # Convert the ith training sentence to lower case and split it into words. You should get a list of words.
        sentence_words = X[i].lower().split()

        # Initialize j to 0
        j = 0

        # Loop over the words of sentence_words
        for w in sentence_words:
            # Set the (i,j)th entry of X_indices to the index of the correct word.
            X_indices[i, j] = word_to_index[w]
            # Increment j to j + 1
            j = j + 1
    ### END CODE HERE ###

    return X_indices

Run the following cell to check what sentences_to_indices() does, and check your results.

X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1, word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =", X1_indices)

X1 = ['funny lol' 'lets play baseball' 'food is ready for you']
X1_indices = [[155345. 225122.      0.      0.      0.]
 [220930. 286375.  69714.      0.      0.]
 [151204. 192973. 302254. 151349. 394475.]]

Expected Output:

**X1 =** [‘funny lol’ ‘lets play football’ ‘food is ready for you’]
**X1_indices =** [[ 155345. 225122. 0. 0. 0.]
[ 220930. 286375. 151266. 0. 0.]
[ 151204. 192973. 302254. 151349. 394475.]]

Let’s build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence.

Exercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps:
1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape.
2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map.
3. Define the Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, it would allow the optimization algorithm to modify the values of the word embeddings.
4. Set the embedding weights to be equal to the embedding matrix

# GRADED FUNCTION: pretrained_embedding_layer

def pretrained_embedding_layer(word_to_vec_map, word_to_index):
    """
    Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.

    Arguments:
    word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    embedding_layer -- pretrained layer Keras instance
    """

    vocab_len = len(word_to_index) + 1                  # adding 1 to fit Keras embedding (requirement)
    emb_dim = word_to_vec_map["cucumber"].shape[0]      # define dimensionality of your GloVe word vectors (= 50)

    ### START CODE HERE ###
    # Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
    emb_matrix = np.zeros((vocab_len, emb_dim))

    # Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
    for word, index in word_to_index.items():
        emb_matrix[index, :] = word_to_vec_map[word]

    # Define the Keras embedding layer with the correct output/input sizes, and make it non-trainable.
    # Use Embedding(...) and make sure to set trainable=False.
    embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
    ### END CODE HERE ###

    # Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
    embedding_layer.build((None,))

    # Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
    embedding_layer.set_weights([emb_matrix])

    return embedding_layer

embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3])

weights[0][1][3] = -0.3403

Expected Output:

**weights[0][1][3] =** -0.3403

2.4 - Building the Emojifier-V2

Let's now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network.



Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier.

Exercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation().

# GRADED FUNCTION: Emojify_V2

def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
    """
    Function creating the Emojify-v2 model's graph.

    Arguments:
    input_shape -- shape of the input, usually (max_len,)
    word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    model -- a model instance in Keras
    """

    ### START CODE HERE ###
    # Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
    sentence_indices = Input(shape=input_shape, dtype='int32')

    # Create the embedding layer pretrained with GloVe Vectors (≈1 line)
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)

    # Propagate sentence_indices through your embedding layer, you get back the embeddings
    embeddings = embedding_layer(sentence_indices)

    # Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
    # Be careful, the returned output should be a batch of sequences.
    X = LSTM(128, return_sequences=True)(embeddings)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through another LSTM layer with 128-dimensional hidden state
    # Be careful, the returned output should be a single hidden state, not a batch of sequences.
    X = LSTM(128, return_sequences=False)(X)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through a Dense layer to get back a batch of 5-dimensional vectors.
    X = Dense(5)(X)
    # Add a softmax activation (applying softmax only once, in this final activation)
    X = Activation('softmax')(X)

    # Create Model instance which converts sentence_indices into X.
    model = Model(inputs=sentence_indices, outputs=X)
    ### END CODE HERE ###

    return model

Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture: it uses 20,223,927 parameters, of which 20,000,050 (the word embeddings) are non-trainable, and the remaining 223,877 are trainable. Because our vocabulary has 400,001 words (with valid indices from 0 to 400,000), there are 400,001 × 50 = 20,000,050 non-trainable parameters.
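You can check these counts by hand; a short sketch of the arithmetic (the LSTM formula 4 × ((input_dim + units) × units + units) is the standard Keras parameter count, stated here for reference rather than taken from the assignment text):

embedding = 400001 * 50                    # 20,000,050 non-trainable embedding weights
lstm_1 = 4 * ((50 + 128) * 128 + 128)      # 91,648: gates see 50-dim embeddings + 128-dim state
lstm_2 = 4 * ((128 + 128) * 128 + 128)     # 131,584: gates see 128-dim inputs + 128-dim state
dense = 128 * 5 + 5                        # 645 weights + biases for the softmax layer
print(lstm_1 + lstm_2 + dense)             # 223,877 trainable parameters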

model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 10)                0
_________________________________________________________________
embedding_2 (Embedding)      (None, 10, 50)            20000050
_________________________________________________________________
lstm_1 (LSTM)                (None, 10, 128)           91648
_________________________________________________________________
dropout_1 (Dropout)          (None, 10, 128)           0
_________________________________________________________________
lstm_2 (LSTM)                (None, 128)               131584
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_1 (Dense)              (None, 5)                 645
_________________________________________________________________
activation_1 (Activation)    (None, 5)                 0
=================================================================
Total params: 20,223,927
Trainable params: 223,877
Non-trainable params: 20,000,050
_________________________________________________________________

As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer, and metrics you want to use. Compile your model using the categorical_crossentropy loss, the adam optimizer, and ['accuracy'] metrics:

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

It’s time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).

X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)

Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32.

model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)

Epoch 1/50
132/132 [==============================] - 3s 21ms/step - loss: 1.6086 - acc: 0.1818
Epoch 2/50
132/132 [==============================] - 0s 773us/step - loss: 1.5870 - acc: 0.3409
Epoch 3/50
132/132 [==============================] - 0s 773us/step - loss: 1.5725 - acc: 0.2652
............
Epoch 37/50
132/132 [==============================] - 0s 713us/step - loss: 1.2161 - acc: 0.6894
Epoch 38/50
132/132 [==============================] - 0s 796us/step - loss: 1.2403 - acc: 0.6591
Epoch 39/50
132/132 [==============================] - 0s 841us/step - loss: 1.2404 - acc: 0.6591
Epoch 40/50
132/132 [==============================] - 0s 872us/step - loss: 1.2219 - acc: 0.6742
Epoch 41/50
132/132 [==============================] - 0s 834us/step - loss: 1.2183 - acc: 0.6818
Epoch 42/50
132/132 [==============================] - 0s 917us/step - loss: 1.1985 - acc: 0.6970
Epoch 43/50
132/132 [==============================] - 0s 864us/step - loss: 1.1996 - acc: 0.6970
Epoch 44/50
132/132 [==============================] - 0s 993us/step - loss: 1.1839 - acc: 0.7197
Epoch 45/50
132/132 [==============================] - 0s 834us/step - loss: 1.1949 - acc: 0.7121
Epoch 46/50
132/132 [==============================] - 0s 758us/step - loss: 1.1841 - acc: 0.7121
Epoch 47/50
132/132 [==============================] - 0s 781us/step - loss: 1.1618 - acc: 0.7424
Epoch 48/50
132/132 [==============================] - 0s 796us/step - loss: 1.1614 - acc: 0.7348
Epoch 49/50
132/132 [==============================] - 0s 773us/step - loss: 1.1440 - acc: 0.7727
Epoch 50/50
132/132 [==============================] - 0s 758us/step - loss: 1.1098 - acc: 0.7955

<keras.callbacks.History at 0x237004d0518>

Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set.

X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)

56/56 [==============================] - 0s 2ms/step

Test accuracy =  0.839285705770765

You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples.

# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
    x = X_test_indices
    num = np.argmax(pred[i])
    if(num != Y_test[i]):
        print('Expected emoji:' + label_to_emoji(Y_test[i]) + ' prediction: ' + X_test[i] + label_to_emoji(num).strip())

Expected emoji:? prediction: she got me a nice present ??
Expected emoji:? prediction: work is hard ?
Expected emoji:? prediction: This girl is messing with me ??
Expected emoji:? prediction: This stupid grader is not working ??
Expected emoji:? prediction: work is horrible ?
Expected emoji:? prediction: you brighten my day ??
Expected emoji:? prediction: she is a bully ??
Expected emoji:? prediction: Why are you feeling bad ??
Expected emoji:? prediction: My life is so boring ??

Now you can try it on your own example. Write your own sentence below.

# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] + ' ' + label_to_emoji(np.argmax(model.predict(X_test_indices))))

not feeling happy 😞

Previously, the Emojify-V1 model did not correctly label "not feeling happy," but our implementation of Emojify-V2 got it right. (Keras' outputs are slightly random each time, so you may not have obtained the same result.) The current model is still not very robust at understanding negation (like "not happy") because the training set is small and doesn't contain many examples of negation. But if the training set were larger, the LSTM model would be much better than the Emojify-V1 model at understanding such complex sentences.


Congratulations!

You have completed this notebook!


What you should remember:
- If you have an NLP task where the training set is small, using word embeddings can help your algorithm significantly. Word embeddings allow your model to work on words in the test set that may not even have appeared in your training set.
- Training sequence models in Keras (and in most other deep learning frameworks) requires a few important details:
- To use mini-batches, the sequences need to be padded so that all the examples in a mini-batch have the same length.
- An Embedding() layer can be initialized with pretrained values. These values can be either fixed or trained further on your dataset. If however your labeled dataset is small, it’s usually not worth trying to train a large pre-trained set of embeddings.
- LSTM() has a flag called return_sequences to decide if you would like to return every hidden state or only the last one (see the sketch after this list).
- You can use Dropout() right after LSTM() to regularize your network.
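As a quick illustration of the return_sequences flag (a toy sketch with made-up sizes, separate from the assignment code):

from keras.models import Model
from keras.layers import Input, LSTM

inp = Input(shape=(10, 50))                      # 10 timesteps of 50-dimensional vectors
seq = LSTM(128, return_sequences=True)(inp)      # one hidden state per timestep
last = LSTM(128, return_sequences=False)(inp)    # only the final hidden state
print(Model(inp, seq).output_shape)              # (None, 10, 128)
print(Model(inp, last).output_shape)             # (None, 128)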


Congratulations on finishing this assignment and building an Emojifier. We hope you’re happy with what you’ve accomplished in this notebook!


Acknowledgments

Thanks to Alison Darcy and the Woebot team for their advice on the creation of this assignment. Woebot is a chatbot friend that is ready to speak with you 24/7. As part of Woebot’s technology, it uses word embeddings to understand the emotions of what you say. You can play with it by going to http://woebot.io

