
Andrew Ng deep learning course (deeplearning.ai) assignment: Class 4 Week 3 Car detection

Published: 2025/3/21

This is my own answer to the Andrew Ng deeplearning.ai course assignment.
A few notes:
1. People keep asking in the comments why directly copying these notebooks doesn't run. Please don't copy-paste; it cannot possibly run as-is. This is only the part of the notebook that we have to write ourselves; running it also requires the other .py files, so please download the complete assignment from GitHub yourself. The code here is only for reference. I recommend writing it step by step following the hints, and looking at the answer only if you really get stuck. I think that is the right way to learn, and the assignment isn't that hard anyway.
2. Some commenters accuse me of plagiarism, saying my comments are less detailed than someone else's and that the code doesn't run when copied. My reply: before freeloading, please understand what this assignment is. Everyone downloads the same original assignment from GitHub and writes code following the hints before each cell (which usually specify the function and formula to use), and there is an expected output to compare against; if the program is correct, the results are generally identical. Please don't mindlessly accuse me of copying someone else's answers. In the end, all we do is read the text and, following the hints, add a small amount of our own code.
3. Because I really dislike mindless trolls, I have disabled the comments below; my apologies. If you have questions, please message me privately and I will try to help where I can.

Preparation:

This assignment mainly practices the YOLO algorithm, and a yolo.h5 model is used later on. We have to download the pretrained model from the official YOLO site ourselves and convert it into an h5 file that Python can read.
The steps are given on GitHub (link: https://github.com/allanzelener/YAD2K).
If building it yourself is too much trouble, use my Baidu Cloud link instead.
Baidu Cloud link for my prebuilt yolo.h5 file:
Link: https://pan.baidu.com/s/1dGbyycT  Password: xgr2

Configure the environment

The YAD2K author uses Anaconda and creates a new environment (yad2k).
Using the default Python environment works just as well, but it must be Python 3.

Download the cfg and weights files of the pretrained model:

wget http://pjreddie.com/media/files/yolo.weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolo.cfg

Generate the h5 file we want

Since the program uses the Keras framework, the weights have to be converted to an h5 file before Python can load them.

python3 yad2k.py yolo.cfg yolo.weights model_data/yolo.h5

Autonomous driving - Car detection

Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).

You will learn to:
- Use object detection on a car detection dataset
- Deal with bounding boxes

Run the following cell to load the packages and dependencies that are going to be useful for your journey!

import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body

%matplotlib inline

Using TensorFlow backend.

Important Note: As you can see, we import Keras’s backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...).

1 - Problem Statement

You are working on a self-driving car. As a critical component of this project, you’d like to first build a car detection system. To collect data, you’ve mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.



Pictures taken from a car-mounted camera while driving around Silicon Valley.
We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.

You’ve gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here’s an example of what your bounding boxes look like.


Figure 1 : Definition of a box

If you have 80 classes that you want YOLO to recognize, you can represent the class label c either as an integer from 1 to 80, or as an 80-dimensional vector, one component of which is 1 and the rest of which are 0. The video lectures used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
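For concreteness, here is a small numpy sketch (not part of the assignment) showing how the two label representations correspond for an 80-class problem:

```python
import numpy as np

def to_one_hot(class_index, num_classes=80):
    """Convert an integer class label (0-based here) into a one-hot vector."""
    v = np.zeros(num_classes)
    v[class_index] = 1.0
    return v

label = 2                       # integer representation
one_hot = to_one_hot(label)     # 80-dimensional vector representation
print(one_hot[:5])              # [0. 0. 1. 0. 0.]
print(int(np.argmax(one_hot)))  # 2 -- recover the integer label
```

Either form carries the same information; argmax converts the vector back to the integer.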

In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.

2 - YOLO

YOLO (“you only look once”) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm “only looks once” at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.

2.1 - Model details

First things to know:
- The input is a batch of images of shape (m, 608, 608, 3)
- The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers (p_c, b_x, b_y, b_h, b_w, c) as explained above. If you expand c into an 80-dimensional vector, each bounding box is then represented by 85 numbers.

We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).

Let's look in greater detail at what this encoding represents.

Figure 2 : Encoding architecture for YOLO

If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.

Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.

For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).


Figure 3 : Flattening the last two dimensions
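This flattening is just a reshape; a minimal numpy sketch (for intuition only, using a dummy zero tensor):

```python
import numpy as np

# A dummy CNN encoding for one image: 19x19 grid, 5 anchor boxes, 85 numbers per box
encoding = np.zeros((19, 19, 5, 85))

# Flattening the last two dimensions merges the 5 boxes into one 425-number vector per cell
flat = encoding.reshape(19, 19, 5 * 85)
print(flat.shape)  # (19, 19, 425)

# The operation is reversible: the flat layout can be viewed back as (5, 85)
restored = flat.reshape(19, 19, 5, 85)
print(np.array_equal(encoding, restored))  # True
```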

Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.

Figure 4 : Find the class detected by each box

Here’s one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.

Doing this results in this picture:


Figure 5 : Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.

Note that this visualization isn’t a core part of the YOLO algorithm itself for making predictions; it’s just a nice way of visualizing an intermediate result of the algorithm.
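As a rough numpy sketch of that visualization step, assuming a random scores tensor with the shapes above (this is not part of the graded code; the classes-last layout is an assumption for illustration):

```python
import numpy as np

np.random.seed(0)
# Per-cell scores: 19x19 cells, 5 anchor boxes, 80 class probabilities each
box_scores = np.random.rand(19, 19, 5, 80)

# For each cell, take the max over both anchors and classes...
cell_max = box_scores.max(axis=(2, 3))   # shape (19, 19)

# ...and find which class achieved it: argmax over the flattened (5, 80) block,
# where index = anchor * 80 + class, so "% 80" recovers the class index
flat = box_scores.reshape(19, 19, 5 * 80)
cell_class = flat.argmax(axis=-1) % 80   # class index per cell, shape (19, 19)

print(cell_max.shape, cell_class.shape)  # (19, 19) (19, 19)
```

cell_class is exactly what gets color-coded in Figure 5.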

Another way to visualize YOLO’s output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:


Figure 6 : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes.

In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You’d like to filter the algorithm’s output down to a much smaller number of detected objects. To do so, you’ll use non-max suppression. Specifically, you’ll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.

2.2 - Filtering with a threshold on class scores

You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class “score” is less than a chosen threshold.

The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It’ll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- box_confidence: tensor of shape (19×19, 5, 1) containing p_c (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- boxes: tensor of shape (19×19, 5, 4) containing (b_x, b_y, b_h, b_w) for each of the 5 boxes per cell.
- box_class_probs: tensor of shape (19×19, 5, 80) containing the detection probabilities (c_1, c_2, ..., c_80) for each of the 80 classes for each of the 5 boxes per cell.
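A minimal numpy sketch of this rearrangement, assuming the 85 numbers per box are laid out as p_c, then (b_x, b_y, b_h, b_w), then the 80 class probabilities (in the actual notebook these tensors come from yolo_head, so this is for intuition only):

```python
import numpy as np

np.random.seed(0)
encoding = np.random.rand(19 * 19, 5, 85)  # flattened grid: one row per cell

box_confidence  = encoding[..., 0:1]   # p_c,                  shape (361, 5, 1)
boxes           = encoding[..., 1:5]   # (b_x, b_y, b_h, b_w), shape (361, 5, 4)
box_class_probs = encoding[..., 5:85]  # (c_1, ..., c_80),     shape (361, 5, 80)

print(box_confidence.shape, boxes.shape, box_class_probs.shape)
```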

Exercise: Implement yolo_filter_boxes().
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:

a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b  # shape of c will be (19*19, 5, 80)
2. For each box, find:
   - the index of the class with the maximum box score (Hint: be careful with what axis you choose; consider using axis=-1)
   - the corresponding box score (Hint: same caveat about the axis)
3. Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep.

Reminder: to call a Keras function, you should use K.function(...).

# GRADED FUNCTION: yolo_filter_boxes

def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
    """Filters YOLO boxes by thresholding on object and class confidence.

    Arguments:
    box_confidence -- tensor of shape (19, 19, 5, 1)
    boxes -- tensor of shape (19, 19, 5, 4)
    box_class_probs -- tensor of shape (19, 19, 5, 80)
    threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box

    Returns:
    scores -- tensor of shape (None,), containing the class probability score for selected boxes
    boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
    classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes

    Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
    For example, the actual output size of scores would be (10,) if there are 10 boxes.
    """

    # Step 1: Compute box scores
    ### START CODE HERE ### (≈ 1 line)
    box_scores = box_confidence * box_class_probs
    ### END CODE HERE ###

    # Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
    ### START CODE HERE ### (≈ 2 lines)
    box_classes = K.argmax(box_scores, axis=-1)
    box_class_scores = K.max(box_scores, axis=-1, keepdims=False)
    ### END CODE HERE ###

    # Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
    # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
    ### START CODE HERE ### (≈ 1 line)
    filtering_mask = box_class_scores >= threshold
    ### END CODE HERE ###

    # Step 4: Apply the mask to scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = tf.boolean_mask(box_class_scores, filtering_mask)
    boxes = tf.boolean_mask(boxes, filtering_mask)
    classes = tf.boolean_mask(box_classes, filtering_mask)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_a:
    box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
    box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.shape))
    print("boxes.shape = " + str(boxes.shape))
    print("classes.shape = " + str(classes.shape))

scores[2] = 10.7506
boxes[2] = [ 8.42653275  3.27136683 -0.5313437  -4.94137383]
classes[2] = 7
scores.shape = (?,)
boxes.shape = (?, 4)
classes.shape = (?,)

    Expected Output:

    scores[2] 10.7506
    boxes[2] [ 8.42653275 3.27136683 -0.5313437 -4.94137383]
    classes[2] 7
    scores.shape (?,)
    boxes.shape (?, 4)
    classes.shape (?,)

    2.3 - Non-max suppression

Even after filtering by thresholding on the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).


Figure 7 : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes.

Non-max suppression relies on a very important function called “Intersection over Union”, or IoU.


    Figure 8 : Definition of “Intersection over Union”.

    Exercise: Implement iou(). Some hints:
    - In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
    - To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)
    - You’ll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:
    - xi1 = maximum of the x1 coordinates of the two boxes
    - yi1 = maximum of the y1 coordinates of the two boxes
    - xi2 = minimum of the x2 coordinates of the two boxes
    - yi2 = minimum of the y2 coordinates of the two boxes

    In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.

# GRADED FUNCTION: iou

def iou(box1, box2):
    """Implement the intersection over union (IoU) between box1 and box2

    Arguments:
    box1 -- first box, list object with coordinates (x1, y1, x2, y2)
    box2 -- second box, list object with coordinates (x1, y1, x2, y2)
    """

    # Calculate the (xi1, yi1, xi2, yi2) coordinates of the intersection of box1 and box2. Calculate its Area.
    ### START CODE HERE ### (≈ 5 lines)
    xi1 = max(box1[0], box2[0])
    yi1 = max(box1[1], box2[1])
    xi2 = min(box1[2], box2[2])
    yi2 = min(box1[3], box2[3])
    # Clamp at 0 so that non-overlapping boxes give an intersection of 0
    # (without the clamp, two negative side lengths would multiply into a positive area)
    inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    ### END CODE HERE ###

    # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
    ### START CODE HERE ### (≈ 3 lines)
    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union_area = box1_area + box2_area - inter_area
    ### END CODE HERE ###

    # compute the IoU
    ### START CODE HERE ### (≈ 1 line)
    iou = float(inter_area) / float(union_area)
    ### END CODE HERE ###

    return iou

box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))

iou = 0.14285714285714285

    Expected Output:

    iou = 0.14285714285714285

    You are now ready to implement non-max suppression. The key steps are:
    1. Select the box that has the highest score.
    2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold.
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.

    This will remove all boxes that have a large overlap with the selected boxes. Only the “best” boxes remain.
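The three steps above can be sketched in plain Python (a reference implementation for intuition only; the graded exercise uses TensorFlow's built-in NMS instead, and the iou helper here repeats the earlier exercise so the sketch is self-contained):

```python
def iou(b1, b2):
    """IoU of two boxes given as (x1, y1, x2, y2) corners."""
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression; returns indices of the kept boxes."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)   # step 1: pick the highest-scoring remaining box
        keep.append(best)
        # step 2: remove every remaining box that overlaps it more than iou_threshold
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
        # step 3: loop back until no boxes remain
    return keep

boxes = [(0, 0, 2, 2), (0.1, 0.1, 2, 2), (3, 3, 5, 5)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate box 1 is suppressed
```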

    Exercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don’t actually need to use your iou() implementation):
    - tf.image.non_max_suppression()
    - K.gather()

# GRADED FUNCTION: yolo_non_max_suppression

def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
    """Applies Non-max suppression (NMS) to set of boxes

    Arguments:
    scores -- tensor of shape (None,), output of yolo_filter_boxes()
    boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
    classes -- tensor of shape (None,), output of yolo_filter_boxes()
    max_boxes -- integer, maximum number of predicted boxes you'd like
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (, None), predicted score for each box
    boxes -- tensor of shape (4, None), predicted box coordinates
    classes -- tensor of shape (, None), predicted class for each box

    Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
    function will transpose the shapes of scores, boxes, classes. This is made for convenience.
    """

    max_boxes_tensor = K.variable(max_boxes, dtype='int32')  # tensor to be used in tf.image.non_max_suppression()
    K.get_session().run(tf.variables_initializer([max_boxes_tensor]))  # initialize variable max_boxes_tensor

    # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
    ### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes_tensor, iou_threshold)
    ### END CODE HERE ###

    # Use K.gather() to select only nms_indices from scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = K.gather(scores, nms_indices)
    boxes = K.gather(boxes, nms_indices)
    classes = K.gather(classes, nms_indices)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
    classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))

scores[2] = 6.9384
boxes[2] = [-5.299932    3.13798141  4.45036697  0.95942086]
classes[2] = -2.24527
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)

    Expected Output:

    scores[2] 6.9384
    boxes[2] [-5.299932 3.13798141 4.45036697 0.95942086]
    classes[2] -2.24527
    scores.shape (10,)
    boxes.shape (10, 4)
    classes.shape (10,)

2.4 - Wrapping up the filtering

    It’s time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you’ve just implemented.

Exercise: Implement yolo_eval(), which takes the output of the YOLO encoding and filters the boxes using score thresholding and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):

    boxes = yolo_boxes_to_corners(box_xy, box_wh)

    which converts the yolo box coordinates (x,y,w,h) to box corners’ coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes

    boxes = scale_boxes(boxes, image_shape)

    YOLO’s network was trained to run on 608x608 images. If you are testing this data on a different size image–for example, the car detection dataset had 720x1280 images–this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.

    Don’t worry about these two functions; we’ll show you where they need to be called.
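To build intuition for the rescaling step, here is a hedged numpy sketch of what such a helper does. The real scale_boxes comes from yolo_utils; the (y1, x1, y2, x2) ordering and the assumption that boxes arrive as fractions of the image size are illustrative, not taken from the provided code:

```python
import numpy as np

def scale_boxes_sketch(boxes, image_shape):
    """Rescale corner boxes from fractions of the image to pixel coordinates.

    Assumes (illustrative layout) boxes are (y1, x1, y2, x2) expressed as
    fractions of the image size, and image_shape is (height, width).
    """
    height, width = image_shape
    scale = np.array([height, width, height, width])
    return boxes * scale

boxes = np.array([[0.25, 0.25, 0.75, 0.75]])     # a centered box in fractional coords
print(scale_boxes_sketch(boxes, (720., 1280.)))  # [[180. 320. 540. 960.]]
```

This is why the same fractional box plots correctly on a 720x1280 photo even though the network saw a 608x608 input.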

# GRADED FUNCTION: yolo_eval

def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
    """Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.

    Arguments:
    yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
                    box_confidence: tensor of shape (None, 19, 19, 5, 1)
                    box_xy: tensor of shape (None, 19, 19, 5, 2)
                    box_wh: tensor of shape (None, 19, 19, 5, 2)
                    box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
    max_boxes -- integer, maximum number of predicted boxes you'd like
    score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (None, ), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box
    """

    ### START CODE HERE ###
    # Retrieve outputs of the YOLO model (≈1 line)
    box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs

    # Convert boxes to be ready for filtering functions
    boxes = yolo_boxes_to_corners(box_xy, box_wh)

    # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold)

    # Scale boxes back to original image shape.
    boxes = scale_boxes(boxes, image_shape)

    # Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes = max_boxes, iou_threshold = iou_threshold)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
    scores, boxes, classes = yolo_eval(yolo_outputs)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))

scores[2] = 138.791
boxes[2] = [ 1292.32971191  -278.52166748  3876.98925781  -835.56494141]
classes[2] = 54
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)

    Expected Output:

    scores[2] 138.791
    boxes[2] [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
    classes[2] 54
    scores.shape (10,)
    boxes.shape (10, 4)
    classes.shape (10,)


    Summary for YOLO:
    - Input image (608, 608, 3)
    - The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
    - After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
    - Each cell in a 19x19 grid over the input image gives 425 numbers.
    - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80, where 5 is because (p_c, b_x, b_y, b_h, b_w) has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
    - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
    - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
    - This gives you YOLO’s final output.

    3 - Test YOLO pretrained model on images

    In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by creating a session to start your graph. Run the following cell.

    sess = K.get_session()

    3.1 - Defining classes, anchors and image shape.

    Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files “coco_classes.txt” and “yolo_anchors.txt”. Let’s load these quantities into the model by running the next cell.

    The car detection dataset has 720x1280 images, which we’ve pre-processed into 608x608 images.

class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)

    3.2 - Loading a pretrained model

    Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in “yolo.h5”. (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the “YOLOv2” model, but we will more simply refer to it as “YOLO” in this notebook.) Run the cell below to load the model from this file.

yolo_model = load_model("model_data/yolo.h5")

/usr/local/lib/python3.5/dist-packages/keras/models.py:252: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '

    This loads the weights of a trained YOLO model. Here’s a summary of the layers your model contains.

yolo_model.summary()

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 608, 608, 3)  0
conv2d_1 (Conv2D)               (None, 608, 608, 32) 864         input_1[0][0]
batch_normalization_1 (BatchNor (None, 608, 608, 32) 128         conv2d_1[0][0]
leaky_re_lu_1 (LeakyReLU)       (None, 608, 608, 32) 0           batch_normalization_1[0][0]
max_pooling2d_1 (MaxPooling2D)  (None, 304, 304, 32) 0           leaky_re_lu_1[0][0]
conv2d_2 (Conv2D)               (None, 304, 304, 64) 18432       max_pooling2d_1[0][0]
batch_normalization_2 (BatchNor (None, 304, 304, 64) 256         conv2d_2[0][0]
leaky_re_lu_2 (LeakyReLU)       (None, 304, 304, 64) 0           batch_normalization_2[0][0]
max_pooling2d_2 (MaxPooling2D)  (None, 152, 152, 64) 0           leaky_re_lu_2[0][0]
conv2d_3 (Conv2D)               (None, 152, 152, 128 73728       max_pooling2d_2[0][0]
batch_normalization_3 (BatchNor (None, 152, 152, 128 512         conv2d_3[0][0]
leaky_re_lu_3 (LeakyReLU)       (None, 152, 152, 128 0           batch_normalization_3[0][0]
conv2d_4 (Conv2D)               (None, 152, 152, 64) 8192        leaky_re_lu_3[0][0]
batch_normalization_4 (BatchNor (None, 152, 152, 64) 256         conv2d_4[0][0]
leaky_re_lu_4 (LeakyReLU)       (None, 152, 152, 64) 0           batch_normalization_4[0][0]
conv2d_5 (Conv2D)               (None, 152, 152, 128 73728       leaky_re_lu_4[0][0]
batch_normalization_5 (BatchNor (None, 152, 152, 128 512         conv2d_5[0][0]
leaky_re_lu_5 (LeakyReLU)       (None, 152, 152, 128 0           batch_normalization_5[0][0]
max_pooling2d_3 (MaxPooling2D)  (None, 76, 76, 128)  0           leaky_re_lu_5[0][0]
conv2d_6 (Conv2D)               (None, 76, 76, 256)  294912      max_pooling2d_3[0][0]
batch_normalization_6 (BatchNor (None, 76, 76, 256)  1024        conv2d_6[0][0]
leaky_re_lu_6 (LeakyReLU)       (None, 76, 76, 256)  0           batch_normalization_6[0][0]
conv2d_7 (Conv2D)               (None, 76, 76, 128)  32768       leaky_re_lu_6[0][0]
batch_normalization_7 (BatchNor (None, 76, 76, 128)  512         conv2d_7[0][0]
leaky_re_lu_7 (LeakyReLU)       (None, 76, 76, 128)  0           batch_normalization_7[0][0]
conv2d_8 (Conv2D)               (None, 76, 76, 256)  294912      leaky_re_lu_7[0][0]
batch_normalization_8 (BatchNor (None, 76, 76, 256)  1024        conv2d_8[0][0]
leaky_re_lu_8 (LeakyReLU)       (None, 76, 76, 256)  0           batch_normalization_8[0][0]
max_pooling2d_4 (MaxPooling2D)  (None, 38, 38, 256)  0           leaky_re_lu_8[0][0]
conv2d_9 (Conv2D)               (None, 38, 38, 512)  1179648     max_pooling2d_4[0][0]
batch_normalization_9 (BatchNor (None, 38, 38, 512)  2048        conv2d_9[0][0]
leaky_re_lu_9 (LeakyReLU)       (None, 38, 38, 512)  0           batch_normalization_9[0][0]
conv2d_10 (Conv2D)              (None, 38, 38, 256)  131072      leaky_re_lu_9[0][0]
batch_normalization_10 (BatchNo (None, 38, 38, 256)  1024        conv2d_10[0][0]
leaky_re_lu_10 (LeakyReLU)      (None, 38, 38, 256)  0           batch_normalization_10[0][0]
conv2d_11 (Conv2D)              (None, 38, 38, 512)  1179648     leaky_re_lu_10[0][0]
batch_normalization_11 (BatchNo (None, 38, 38, 512)  2048        conv2d_11[0][0]
leaky_re_lu_11 (LeakyReLU)      (None, 38, 38, 512)  0           batch_normalization_11[0][0]
conv2d_12 (Conv2D)              (None, 38, 38, 256)  131072      leaky_re_lu_11[0][0]
batch_normalization_12 (BatchNo (None, 38, 38, 256)  1024        conv2d_12[0][0]
leaky_re_lu_12 (LeakyReLU)      (None, 38, 38, 256)  0           batch_normalization_12[0][0]
conv2d_13 (Conv2D)              (None, 38, 38, 512)  1179648     leaky_re_lu_12[0][0]
batch_normalization_13 (BatchNo (None, 38, 38, 512)  2048        conv2d_13[0][0]
leaky_re_lu_13 (LeakyReLU)      (None, 38, 38, 512)  0           batch_normalization_13[0][0]
max_pooling2d_5 (MaxPooling2D)  (None, 19, 19, 512)  0           leaky_re_lu_13[0][0]
conv2d_14 (Conv2D)              (None, 19, 19, 1024) 4718592     max_pooling2d_5[0][0]
batch_normalization_14 (BatchNo (None, 19, 19, 1024) 4096        conv2d_14[0][0]
leaky_re_lu_14 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_14[0][0]
conv2d_15 (Conv2D)              (None, 19, 19, 512)  524288      leaky_re_lu_14[0][0]
batch_normalization_15 (BatchNo (None, 19, 19, 512)  2048        conv2d_15[0][0]
leaky_re_lu_15 (LeakyReLU)      (None, 19, 19, 512)  0           batch_normalization_15[0][0]
conv2d_16 (Conv2D)              (None, 19, 19, 1024) 4718592     leaky_re_lu_15[0][0]
batch_normalization_16 (BatchNo (None, 19, 19, 1024) 4096        conv2d_16[0][0]
leaky_re_lu_16 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_16[0][0]
conv2d_17 (Conv2D)              (None, 19, 19, 512)  524288      leaky_re_lu_16[0][0]
batch_normalization_17 (BatchNo (None, 19, 19, 512)  2048        conv2d_17[0][0]
leaky_re_lu_17 (LeakyReLU)      (None, 19, 19, 512)  0           batch_normalization_17[0][0]
conv2d_18 (Conv2D)              (None, 19, 19, 1024) 4718592     leaky_re_lu_17[0][0]
batch_normalization_18 (BatchNo (None, 19, 19, 1024) 4096        conv2d_18[0][0]
leaky_re_lu_18 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_18[0][0]
conv2d_19 (Conv2D)              (None, 19, 19, 1024) 9437184     leaky_re_lu_18[0][0]
batch_normalization_19 (BatchNo (None, 19, 19, 1024) 4096        conv2d_19[0][0]
conv2d_21 (Conv2D)              (None, 38, 38, 64)   32768       leaky_re_lu_13[0][0]
leaky_re_lu_19 (LeakyReLU)      (None, 19, 19, 1024) 0           batch_normalization_19[0][0]
batch_normalization_21 (BatchNo (None, 38, 38, 64)   256         conv2d_21[0][0]
conv2d_20 (Conv2D)              (None, 19, 19, 1024) 9437184     leaky_re_lu_19[0][0]
leaky_re_lu_21 (LeakyReLU)      (None, 38, 38, 64)   0           batch_normalization_21[0][0]
batch_normalization_20 (BatchNo (None, 19, 19, 1024) 4096        conv2d_20[0][0]
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0] __________________________________________________________________________________________________ leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0] leaky_re_lu_20[0][0] __________________________________________________________________________________________________ conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0] __________________________________________________________________________________________________ batch_normalization_22 (BatchNo (None, 19, 19, 1024) 4096 conv2d_22[0][0] __________________________________________________________________________________________________ leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0] __________________________________________________________________________________________________ conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0] ================================================================================================== Total params: 50,983,561 Trainable params: 50,962,889 Non-trainable params: 20,672 __________________________________________________________________________________________________

    Note: On some computers, you may see a warning message from Keras. If you do, don't worry about it; it is fine.

    Reminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).

    3.3 - Convert output of the model to usable bounding box tensors

    The output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.

    yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
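    For intuition, yolo_head turns the raw network activations into box coordinates, confidences, and class probabilities: a sigmoid on the center offsets and objectness score, an exponential times the anchor sizes for width/height, and a softmax over the classes. Below is a minimal NumPy sketch of that kind of decoding on toy inputs; the name yolo_head_sketch and the exact details are assumptions for illustration, not the real yad2k implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def yolo_head_sketch(feats, anchors, num_classes):
    """Decode raw (H, W, num_anchors, 5 + num_classes) output into box tensors.

    feats   -- raw predictions, e.g. shape (19, 19, 5, 85)
    anchors -- (num_anchors, 2) anchor widths/heights in grid units
    Returns box_confidence, box_xy, box_wh, box_class_probs.
    """
    H, W = feats.shape[:2]
    # Grid of cell indices, so box centers can be expressed relative to the image.
    col = np.tile(np.arange(W), (H, 1))
    row = np.tile(np.arange(H)[:, None], (1, W))
    grid = np.stack([col, row], axis=-1)[:, :, None, :]          # (H, W, 1, 2)

    box_xy = (sigmoid(feats[..., 0:2]) + grid) / np.array([W, H])  # centers in [0, 1]
    box_wh = np.exp(feats[..., 2:4]) * anchors / np.array([W, H])  # sizes scaled by anchors
    box_confidence = sigmoid(feats[..., 4:5])                      # P(object)
    e = np.exp(feats[..., 5:] - feats[..., 5:].max(axis=-1, keepdims=True))
    box_class_probs = e / e.sum(axis=-1, keepdims=True)            # softmax over classes
    return box_confidence, box_xy, box_wh, box_class_probs

np.random.seed(0)
feats = np.random.randn(19, 19, 5, 85)
anchors = np.ones((5, 2))
conf, xy, wh, probs = yolo_head_sketch(feats, anchors, 80)
print(xy.shape, (0 <= xy).all() and (xy <= 1).all())   # → (19, 19, 5, 2) True
```

    The real yolo_head returns TensorFlow tensors rather than NumPy arrays, but the shapes and the role of each transformation are the same.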

    You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.

    3.4 - Filtering boxes

    yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call yolo_eval, which you had previously implemented, to do this.

    scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
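    The score-filtering step inside yolo_eval multiplies the objectness confidence by the class probabilities and keeps only boxes whose best class score clears a threshold. A small NumPy sketch of that idea, on boxes that are already flattened and toy-sized (filter_boxes_sketch is a hypothetical name, not the course function):

```python
import numpy as np

def filter_boxes_sketch(box_confidence, box_class_probs, boxes, threshold=0.6):
    # Per-box, per-class score = P(object) * P(class | object).
    scores_all = box_confidence * box_class_probs   # (N, num_classes)
    classes = scores_all.argmax(axis=-1)            # best class per box
    scores = scores_all.max(axis=-1)
    keep = scores >= threshold                      # score thresholding
    return scores[keep], boxes[keep], classes[keep]

box_confidence = np.array([[0.9], [0.3], [0.8]])
box_class_probs = np.array([[0.8, 0.2], [0.9, 0.1], [0.1, 0.9]])
boxes = np.array([[0, 0, 10, 10], [1, 1, 5, 5], [2, 2, 8, 8]], dtype=float)
s, b, c = filter_boxes_sketch(box_confidence, box_class_probs, boxes)
print(s, c)   # box 1 is dropped: 0.3 * 0.9 = 0.27 < 0.6
```

    After this thresholding, yolo_eval additionally applies non-max suppression and rescales the surviving boxes to the original image shape.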

    3.5 - Run the graph on an image

    Let the fun begin. You have created a graph (stored in sess) that can be summarized as follows:

  • yolo_model.input is given to yolo_model. The model is used to compute the output yolo_model.output
  • yolo_model.output is processed by yolo_head. It gives you yolo_outputs
  • yolo_outputs goes through a filtering function, yolo_eval. It outputs your predictions: scores, boxes, classes
    Exercise: Implement predict() which runs the graph to test YOLO on an image. You will need to run a TensorFlow session to have it compute scores, boxes, classes.

    The code below also uses the following function:

    image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))

    which outputs:
    - image: a python (PIL) representation of your image used for drawing boxes. You won’t need to use it.
    - image_data: a numpy-array representing the image. This will be the input to the CNN.
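    Conceptually, this preprocessing amounts to resizing the image to 608×608, scaling pixel values to [0, 1], and adding a batch dimension. The sketch below is a NumPy-only simplification of that (using nearest-neighbour resizing as an assumption; the real preprocess_image resizes via PIL):

```python
import numpy as np

def preprocess_sketch(img, model_size=(608, 608)):
    """Nearest-neighbour resize + scale to [0, 1] + batch dimension."""
    h, w = img.shape[:2]
    rows = np.arange(model_size[0]) * h // model_size[0]   # source row per output row
    cols = np.arange(model_size[1]) * w // model_size[1]   # source col per output col
    resized = img[rows[:, None], cols[None, :]]
    data = resized.astype(np.float32) / 255.0              # pixel values in [0, 1]
    return data[None, ...]                                 # shape (1, 608, 608, 3)

img = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
batch = preprocess_sketch(img)
print(batch.shape, batch.min() >= 0.0, batch.max() <= 1.0)   # → (1, 608, 608, 3) True True
```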

    Important note: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.

    def predict(sess, image_file):"""Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the preditions.Arguments:sess -- your tensorflow/Keras session containing the YOLO graphimage_file -- name of an image stored in the "images" folder.Returns:out_scores -- tensor of shape (None, ), scores of the predicted boxesout_boxes -- tensor of shape (None, 4), coordinates of the predicted boxesout_classes -- tensor of shape (None, ), class index of the predicted boxesNote: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes. """# Preprocess your imageimage, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})### START CODE HERE ### (≈ 1 line)out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input:image_data, K.learning_phase():0})### END CODE HERE #### Print predictions infoprint('Found {} boxes for {}'.format(len(out_boxes), image_file))# Generate colors for drawing bounding boxes.colors = generate_colors(class_names)# Draw bounding boxes on the image filedraw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)# Save the predicted bounding box on the imageimage.save(os.path.join("out", image_file), quality=90)# Display the results in the notebookoutput_image = scipy.misc.imread(os.path.join("out", image_file))imshow(output_image)return out_scores, out_boxes, out_classes

    Run the following cell on the “test.jpg” image to verify that your function is correct.

    out_scores, out_boxes, out_classes = predict(sess, "test.jpg")

    Found 7 boxes for test.jpg
    car 0.60 (925, 285) (1045, 374)
    bus 0.67 (5, 267) (220, 407)
    car 0.68 (705, 279) (786, 351)
    car 0.70 (947, 324) (1280, 704)
    car 0.75 (159, 303) (346, 440)
    car 0.80 (762, 282) (942, 412)
    car 0.89 (366, 299) (745, 648)

    Expected Output:

    Found 7 boxes for test.jpg
    car 0.60 (925, 285) (1045, 374)
    car 0.66 (706, 279) (786, 350)
    bus 0.67 (5, 266) (220, 407)
    car 0.70 (947, 324) (1280, 705)
    car 0.74 (159, 303) (346, 440)
    car 0.80 (761, 282) (942, 412)
    car 0.89 (367, 300) (745, 648)

    The model you’ve just run is actually able to detect 80 different classes listed in “coco_classes.txt”. To test the model on your own images:
    1. Click on “File” in the upper bar of this notebook, then click “Open” to go on your Coursera Hub.
    2. Add your image to this Jupyter Notebook’s directory, in the “images” folder
    3. Write your image's name in the code cell above
    4. Run the code and see the output of the algorithm!

    If you were to run your session in a for loop over all your images, here's what you would get:



    Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley
    Thanks to drive.ai for providing this dataset!


    What you should remember:
    - YOLO is a state-of-the-art object detection model that is fast and accurate
    - It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
    - The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
    - You filter through all the boxes using non-max suppression. Specifically:
    - Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
    - Intersection over Union (IoU) thresholding to eliminate overlapping boxes
    - Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
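    The IoU-based suppression summarized above can be sketched in a few lines of NumPy: keep the highest-scoring box, drop every remaining box that overlaps it beyond a threshold, and repeat. This greedy sketch uses hypothetical names (iou, nms_sketch) rather than the course's TensorFlow implementation:

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms_sketch(boxes, scores, iou_threshold=0.5):
    order = np.argsort(scores)[::-1]   # highest score first
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        # Discard remaining boxes that overlap box i too much.
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) <= iou_threshold])
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms_sketch(boxes, scores)
print(kept)   # → [0, 2]  (box 1 is suppressed by box 0, IoU ≈ 0.68)
```

    In the actual exercise this step is done by tf.image.non_max_suppression inside yolo_eval.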

    References: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener’s github repository. The pretrained weights used in this exercise came from the official YOLO website.
    - Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - You Only Look Once: Unified, Real-Time Object Detection (2015)
    - Joseph Redmon, Ali Farhadi - YOLO9000: Better, Faster, Stronger (2016)
    - Allan Zelener - YAD2K: Yet Another Darknet 2 Keras
    - The official YOLO website (https://pjreddie.com/darknet/yolo/)

    Car detection dataset:

    The Drive.ai Sample Dataset (provided by drive.ai) is licensed under a Creative Commons Attribution 4.0 International License. We are especially grateful to Brody Huval, Chih Hu and Rahul Patel for collecting and providing this dataset.

