
Andrew Ng deeplearning.ai deep learning course assignment: Class 4 Week 3, Car detection

Published: 2025/3/21

These are my own answers to the deeplearning.ai course assignments.

Additional notes:
1. Commenters keep asking why copying these notebooks directly doesn't run. Please don't copy-paste; it cannot possibly run on its own. This is only the part of the notebook that we write ourselves; running it also requires the other .py files, so download the complete assignment from GitHub yourself. The code here is for reference only. I recommend writing it step by step from the hints, and only looking at the answers if you are truly stuck. In my view that is the right way to learn, and the assignments aren't that hard anyway.
2. Some commenters accuse me of plagiarism, saying my comments are less detailed than others' and the copied code doesn't run. My answer: before freeloading, first understand what this assignment is. Everyone downloads the same original assignment from GitHub and writes code according to the hints before each cell (which usually specify the function and the formula); the expected output is given afterwards for comparison, so if the program is correct, the results are generally identical. Please don't mindlessly claim the answers are copied from someone else. In the end, all we do is read the text, follow the hints, and add a small portion of our own code.
3. Because I really dislike mindless trolls, I have disabled comments below; my apologies. If you have a question, send me a private message and I will help where I can.

Preparation:

This assignment practices the YOLO algorithm and uses a yolo.h5 model. You need to download the weights from the official YOLO website yourself and convert them into an .h5 model that Python can read.
The steps are given on GitHub: https://github.com/allanzelener/YAD2K
If converting it yourself is too much trouble, here is my pre-converted yolo.h5 file on Baidu Cloud:
Link: https://pan.baidu.com/s/1dGbyycT  Password: xgr2

Configuring the environment

The YAD2K instructions use Anaconda and create a new environment (yad2k).
Using the default Python environment works just as well, but it must be Python 3.

下載已經訓練好的模型的cfg和weights文件:

wget http://pjreddie.com/media/files/yolo.weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolo.cfg

Generating the h5 file we want

Since the program uses the Keras framework, the weights must first be converted to an .h5 file before they can be loaded:

python3 yad2k.py yolo.cfg yolo.weights model_data/yolo.h5

Autonomous driving - Car detection

Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).

You will learn to:
- Use object detection on a car detection dataset
- Deal with bounding boxes

Run the following cell to load the packages and dependencies that are going to be useful for your journey!

import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body

%matplotlib inline

Using TensorFlow backend.

Important Note: As you can see, we import Keras’s backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...).

1 - Problem Statement

You are working on a self-driving car. As a critical component of this project, you’d like to first build a car detection system. To collect data, you’ve mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.



Pictures taken from a car-mounted camera while driving around Silicon Valley.
We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.

You’ve gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here’s an example of what your bounding boxes look like.


Figure 1 : Definition of a box

If you have 80 classes that you want YOLO to recognize, you can represent the class label c either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers), one component of which is 1 and the rest of which are 0. The video lectures used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
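As a quick illustration of the two equivalent label representations (a hedged sketch, not part of the assignment code; note that numpy indexes classes from 0 while the text counts from 1):

```python
import numpy as np

num_classes = 80
c = 2                          # integer class label (0-indexed in numpy)

# One-hot representation: an 80-dimensional vector with a single 1
one_hot = np.zeros(num_classes)
one_hot[c] = 1.0

# argmax recovers the integer label from the one-hot vector
recovered = int(np.argmax(one_hot))
print(recovered)               # prints 2
```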

In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.

2 - YOLO

YOLO (“you only look once”) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm “only looks once” at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.

2.1 - Model details

First things to know:
- The input is a batch of images of shape (m, 608, 608, 3)
- The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers (pc, bx, by, bh, bw, c) as explained above. If you expand c into an 80-dimensional vector, each bounding box is then represented by 85 numbers.

We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).

Let's look in greater detail at what this encoding represents.

Figure 2 : Encoding architecture for YOLO

If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.

Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.

For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
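This flattening can be checked with a small numpy sketch (illustrative only; a random dummy array stands in for the real encoding):

```python
import numpy as np

# Dummy encoding with the same shape as YOLO's per-image output
encoding = np.random.randn(19, 19, 5, 85)

# Flatten the last two dimensions: (19, 19, 5, 85) -> (19, 19, 425)
flattened = encoding.reshape(19, 19, 5 * 85)

print(flattened.shape)  # (19, 19, 425)
```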


Figure 3 : Flattening the last two dimensions

Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.

Figure 4 : Find the class detected by each box

Here’s one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.

Doing this results in this picture:


Figure 5 : Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.

Note that this visualization isn’t a core part of the YOLO algorithm itself for making predictions; it’s just a nice way of visualizing an intermediate result of the algorithm.

Another way to visualize YOLO’s output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:


Figure 6 : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes.

In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You’d like to filter the algorithm’s output down to a much smaller number of detected objects. To do so, you’ll use non-max suppression. Specifically, you’ll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.

2.2 - Filtering with a threshold on class scores

You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class “score” is less than a chosen threshold.

The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It’ll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- box_confidence: tensor of shape (19×19, 5, 1) containing pc (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- boxes: tensor of shape (19×19, 5, 4) containing (bx, by, bh, bw) for each of the 5 boxes per cell.
- box_class_probs: tensor of shape (19×19, 5, 80) containing the detection probabilities (c1, c2, ..., c80) for each of the 80 classes for each of the 5 boxes per cell.
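A hedged numpy sketch of this rearrangement (in the actual notebook, the provided yolo_head function does this and additionally applies sigmoid/softmax activations; here plain slicing of a dummy array illustrates the shapes only):

```python
import numpy as np

# Dummy stand-in for the (19*19, 5, 85) model output
encoding = np.random.randn(19 * 19, 5, 85)

box_confidence  = encoding[..., 0:1]   # pc                -> (361, 5, 1)
boxes           = encoding[..., 1:5]   # (bx, by, bh, bw)  -> (361, 5, 4)
box_class_probs = encoding[..., 5:]    # (c1, ..., c80)    -> (361, 5, 80)

print(box_confidence.shape, boxes.shape, box_class_probs.shape)
```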

Exercise: Implement yolo_filter_boxes().
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:

a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b  # shape of c will be (19*19, 5, 80)
  • For each box, find:
    • the index of the class with the maximum box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
    • the corresponding box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
  • Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep.
  • Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don’t want. You should be left with just the subset of boxes you want to keep. (Hint)
  • Reminder: to call a Keras function, you should use K.function(...).

# GRADED FUNCTION: yolo_filter_boxes

def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
    """Filters YOLO boxes by thresholding on object and class confidence.

    Arguments:
    box_confidence -- tensor of shape (19, 19, 5, 1)
    boxes -- tensor of shape (19, 19, 5, 4)
    box_class_probs -- tensor of shape (19, 19, 5, 80)
    threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box

    Returns:
    scores -- tensor of shape (None,), containing the class probability score for selected boxes
    boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
    classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes

    Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
    For example, the actual output size of scores would be (10,) if there are 10 boxes.
    """

    # Step 1: Compute box scores
    ### START CODE HERE ### (≈ 1 line)
    box_scores = box_confidence * box_class_probs
    ### END CODE HERE ###

    # Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
    ### START CODE HERE ### (≈ 2 lines)
    box_classes = K.argmax(box_scores, axis=-1)
    box_class_scores = K.max(box_scores, axis=-1, keepdims=False)
    ### END CODE HERE ###

    # Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
    # same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
    ### START CODE HERE ### (≈ 1 line)
    filtering_mask = box_class_scores >= threshold
    ### END CODE HERE ###

    # Step 4: Apply the mask to scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = tf.boolean_mask(box_class_scores, filtering_mask)
    boxes = tf.boolean_mask(boxes, filtering_mask)
    classes = tf.boolean_mask(box_classes, filtering_mask)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_a:
    box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
    box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.shape))
    print("boxes.shape = " + str(boxes.shape))
    print("classes.shape = " + str(classes.shape))

scores[2] = 10.7506
boxes[2] = [ 8.42653275  3.27136683 -0.5313437  -4.94137383]
classes[2] = 7
scores.shape = (?,)
boxes.shape = (?, 4)
classes.shape = (?,)

    Expected Output:

    scores[2] 10.7506
    boxes[2] [ 8.42653275 3.27136683 -0.5313437 -4.94137383]
    classes[2] 7
    scores.shape (?,)
    boxes.shape (?, 4)
    classes.shape (?,)

    2.3 - Non-max suppression

Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).


Figure 7 : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes.

    Non-max suppression uses the very important function called “Intersection over Union”, or IoU.


    Figure 8 : Definition of “Intersection over Union”.

    Exercise: Implement iou(). Some hints:
    - In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
    - To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)
    - You’ll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:
    - xi1 = maximum of the x1 coordinates of the two boxes
    - yi1 = maximum of the y1 coordinates of the two boxes
    - xi2 = minimum of the x2 coordinates of the two boxes
    - yi2 = minimum of the y2 coordinates of the two boxes

    In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.

# GRADED FUNCTION: iou

def iou(box1, box2):
    """Implement the intersection over union (IoU) between box1 and box2

    Arguments:
    box1 -- first box, list object with coordinates (x1, y1, x2, y2)
    box2 -- second box, list object with coordinates (x1, y1, x2, y2)
    """

    # Calculate the (xi1, yi1, xi2, yi2) coordinates of the intersection of box1 and box2. Calculate its Area.
    ### START CODE HERE ### (≈ 5 lines)
    xi1 = max(box1[0], box2[0])
    yi1 = max(box1[1], box2[1])
    xi2 = min(box1[2], box2[2])
    yi2 = min(box1[3], box2[3])
    # Clamp at 0 so non-overlapping boxes give intersection area 0 rather than a negative value
    inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    ### END CODE HERE ###

    # Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
    ### START CODE HERE ### (≈ 3 lines)
    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    union_area = box1_area + box2_area - inter_area
    ### END CODE HERE ###

    # compute the IoU
    ### START CODE HERE ### (≈ 1 line)
    iou = float(inter_area) / float(union_area)
    ### END CODE HERE ###

    return iou

box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))

iou = 0.14285714285714285

    Expected Output:

    iou = 0.14285714285714285

    You are now ready to implement non-max suppression. The key steps are:
    1. Select the box that has the highest score.
    2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold.
    3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.

    This will remove all boxes that have a large overlap with the selected boxes. Only the “best” boxes remain.
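Before reaching for the TensorFlow built-in, the three steps above can be sketched in plain numpy, reusing an iou() helper (a simplified sketch for intuition, not the graded solution):

```python
import numpy as np

def iou(box1, box2):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

def nms(boxes, scores, iou_threshold=0.5):
    order = list(np.argsort(scores)[::-1])   # step 1: sort by score, best first
    keep = []
    while order:
        best = order.pop(0)                  # step 1: select the highest-scoring box
        keep.append(int(best))
        # step 2: drop remaining boxes that overlap the selected one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
        # step 3: loop back and repeat with the next-best surviving box
    return keep

boxes = np.array([[0., 0., 2., 2.], [0., 0., 2.1, 2.1], [5., 5., 7., 7.]])
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # [0, 2]: the near-duplicate of box 0 is suppressed
```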

    Exercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don’t actually need to use your iou() implementation):
    - tf.image.non_max_suppression()
    - K.gather()

# GRADED FUNCTION: yolo_non_max_suppression

def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
    """Applies Non-max suppression (NMS) to set of boxes

    Arguments:
    scores -- tensor of shape (None,), output of yolo_filter_boxes()
    boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
    classes -- tensor of shape (None,), output of yolo_filter_boxes()
    max_boxes -- integer, maximum number of predicted boxes you'd like
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (, None), predicted score for each box
    boxes -- tensor of shape (4, None), predicted box coordinates
    classes -- tensor of shape (, None), predicted class for each box

    Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
    function will transpose the shapes of scores, boxes, classes. This is made for convenience.
    """

    max_boxes_tensor = K.variable(max_boxes, dtype='int32')     # tensor to be used in tf.image.non_max_suppression()
    K.get_session().run(tf.variables_initializer([max_boxes_tensor]))  # initialize variable max_boxes_tensor

    # Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
    ### START CODE HERE ### (≈ 1 line)
    nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold)
    ### END CODE HERE ###

    # Use K.gather() to select only nms_indices from scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
    scores = K.gather(scores, nms_indices)
    boxes = K.gather(boxes, nms_indices)
    classes = K.gather(classes, nms_indices)
    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
    classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))

scores[2] = 6.9384
boxes[2] = [-5.299932    3.13798141  4.45036697  0.95942086]
classes[2] = -2.24527
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)

    Expected Output:

    scores[2] 6.9384
    boxes[2] [-5.299932 3.13798141 4.45036697 0.95942086]
    classes[2] -2.24527
    scores.shape (10,)
    boxes.shape (10, 4)
    classes.shape (10,)

    2.4 - Wrapping up the filtering

    It’s time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you’ve just implemented.

    Exercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There’s just one last implementational detail you have to know. There’re a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):

    boxes = yolo_boxes_to_corners(box_xy, box_wh)

    which converts the yolo box coordinates (x,y,w,h) to box corners’ coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes

    boxes = scale_boxes(boxes, image_shape)

    YOLO’s network was trained to run on 608x608 images. If you are testing this data on a different size image–for example, the car detection dataset had 720x1280 images–this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.

    Don’t worry about these two functions; we’ll show you where they need to be called.
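For intuition only, scale_boxes can be sketched as below, assuming (as in the YAD2K helper) that the corner coordinates are fractions of the image which get multiplied out to (height, width) pixel units. This is an illustrative assumption, not the notebook's exact code:

```python
import numpy as np

def scale_boxes_sketch(boxes, image_shape):
    """Scale fractional (y1, x1, y2, x2) corner boxes to pixel coordinates."""
    height, width = image_shape
    # Multiply each coordinate by the matching image dimension
    return boxes * np.array([height, width, height, width])

boxes = np.array([[0.1, 0.2, 0.5, 0.6]])
scaled = scale_boxes_sketch(boxes, (720., 1280.))
print(scaled)   # [[ 72. 256. 360. 768.]]
```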

# GRADED FUNCTION: yolo_eval

def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
    """Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.

    Arguments:
    yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
                    box_confidence: tensor of shape (None, 19, 19, 5, 1)
                    box_xy: tensor of shape (None, 19, 19, 5, 2)
                    box_wh: tensor of shape (None, 19, 19, 5, 2)
                    box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
    max_boxes -- integer, maximum number of predicted boxes you'd like
    score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
    iou_threshold -- real value, "intersection over union" threshold used for NMS filtering

    Returns:
    scores -- tensor of shape (None, ), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box
    """

    ### START CODE HERE ###

    # Retrieve outputs of the YOLO model (≈1 line)
    box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs

    # Convert boxes to be ready for filtering functions
    boxes = yolo_boxes_to_corners(box_xy, box_wh)

    # Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
    scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = score_threshold)

    # Scale boxes back to original image shape.
    boxes = scale_boxes(boxes, image_shape)

    # Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
    scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes = max_boxes, iou_threshold = iou_threshold)

    ### END CODE HERE ###

    return scores, boxes, classes

with tf.Session() as test_b:
    yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
                    tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
    scores, boxes, classes = yolo_eval(yolo_outputs)
    print("scores[2] = " + str(scores[2].eval()))
    print("boxes[2] = " + str(boxes[2].eval()))
    print("classes[2] = " + str(classes[2].eval()))
    print("scores.shape = " + str(scores.eval().shape))
    print("boxes.shape = " + str(boxes.eval().shape))
    print("classes.shape = " + str(classes.eval().shape))

scores[2] = 138.791
boxes[2] = [ 1292.32971191  -278.52166748  3876.98925781  -835.56494141]
classes[2] = 54
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)

    Expected Output:

    scores[2] 138.791
    boxes[2] [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
    classes[2] 54
    scores.shape (10,)
    boxes.shape (10, 4)
    classes.shape (10,)


    Summary for YOLO:
    - Input image (608, 608, 3)
    - The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
    - After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
    - Each cell in a 19x19 grid over the input image gives 425 numbers.
    - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
    - 85 = 5 + 80, where 5 is because (pc, bx, by, bh, bw) has 5 numbers, and 80 is the number of classes we'd like to detect
    - You then select only a few boxes based on:
    - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
    - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
    - This gives you YOLO’s final output.

    3 - Test YOLO pretrained model on images

    In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by creating a session to start your graph. Run the following cell.

    sess = K.get_session()

    3.1 - Defining classes, anchors and image shape.

    Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files “coco_classes.txt” and “yolo_anchors.txt”. Let’s load these quantities into the model by running the next cell.

    The car detection dataset has 720x1280 images, which we’ve pre-processed into 608x608 images.

class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)

    3.2 - Loading a pretrained model

    Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in “yolo.h5”. (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the “YOLOv2” model, but we will more simply refer to it as “YOLO” in this notebook.) Run the cell below to load the model from this file.

yolo_model = load_model("model_data/yolo.h5")

/usr/local/lib/python3.5/dist-packages/keras/models.py:252: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '

    This loads the weights of a trained YOLO model. Here’s a summary of the layers your model contains.

yolo_model.summary()

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 608, 608, 3)  0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 608, 608, 32) 864         input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 608, 608, 32) 128         conv2d_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU)       (None, 608, 608, 32) 0           batch_normalization_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 304, 304, 32) 0           leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 304, 304, 64) 18432       max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 304, 304, 64) 256         conv2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU)       (None, 304, 304, 64) 0           batch_normalization_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 152, 152, 64) 0           leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
...

(The summary continues with the same alternating Conv2D / BatchNormalization / LeakyReLU and MaxPooling2D pattern, deepening to (19, 19, 1024) feature maps through conv2d_20 and batch_normalization_20; the listing is truncated in the source.)
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_21[0][0] __________________________________________________________________________________________________ leaky_re_lu_20 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_20[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0] leaky_re_lu_20[0][0] __________________________________________________________________________________________________ conv2d_22 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate_1[0][0] __________________________________________________________________________________________________ batch_normalization_22 (BatchNo (None, 19, 19, 1024) 4096 conv2d_22[0][0] __________________________________________________________________________________________________ leaky_re_lu_22 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_22[0][0] __________________________________________________________________________________________________ conv2d_23 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_22[0][0] ================================================================================================== Total params: 50,983,561 Trainable params: 50,962,889 Non-trainable params: 20,672 __________________________________________________________________________________________________

    Note: On some computers, you may see a warning message from Keras. If you do, don't worry; it is harmless.

    Reminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
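    As a quick sanity check on the summary above, the 85 numbers per box in the (m, 19, 19, 5, 85) encoding are 4 box coordinates, 1 confidence score, and 80 class probabilities, and the depth of the final conv2d_23 layer follows directly from that arithmetic:

```python
num_anchors = 5    # anchor boxes per grid cell
box_params = 5     # (b_x, b_y, b_h, b_w) plus the confidence p_c
num_classes = 80   # classes listed in coco_classes.txt
depth = num_anchors * (box_params + num_classes)
print(depth)  # 425, matching conv2d_23's output shape (None, 19, 19, 425)
```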

    3.3 - Convert output of the model to usable bounding box tensors

    The output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.

    yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))

    You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.

    3.4 - Filtering boxes

    yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call yolo_eval, which you had previously implemented, to do this.

    scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)

    3.5 - Run the graph on an image

    Let the fun begin. You have created a (sess) graph that can be summarized as follows:

  • yolo_model.input is given to yolo_model. The model is used to compute the output yolo_model.output
  • yolo_model.output is processed by yolo_head. It gives you yolo_outputs
  • yolo_outputs goes through a filtering function, yolo_eval. It outputs your predictions: scores, boxes, classes
    Exercise: Implement predict() which runs the graph to test YOLO on an image.
    You will need to run a TensorFlow session to have it compute scores, boxes, classes.

    The code below also uses the following function:

    image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))

    which outputs:
    - image: a python (PIL) representation of your image used for drawing boxes. You won’t need to use it.
    - image_data: a numpy-array representing the image. This will be the input to the CNN.
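    For intuition, the normalization and batching that produce image_data can be sketched with plain NumPy; the resizing itself is done with PIL inside preprocess_image, and to_model_input below is a hypothetical helper, not part of yolo_utils:

```python
import numpy as np

def to_model_input(pixels):
    # pixels: (608, 608, 3) uint8 image, already resized to the model size
    data = pixels.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    return np.expand_dims(data, axis=0)        # add batch dim -> (1, 608, 608, 3)

img = np.zeros((608, 608, 3), dtype=np.uint8)
print(to_model_input(img).shape)  # (1, 608, 608, 3)
```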

    Important note: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.

    def predict(sess, image_file):
        """
        Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.

        Arguments:
        sess -- your tensorflow/Keras session containing the YOLO graph
        image_file -- name of an image stored in the "images" folder.

        Returns:
        out_scores -- tensor of shape (None, ), scores of the predicted boxes
        out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
        out_classes -- tensor of shape (None, ), class index of the predicted boxes

        Note: "None" actually represents the number of predicted boxes; it varies between 0 and max_boxes.
        """

        # Preprocess your image
        image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))

        # Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
        # You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0}
        ### START CODE HERE ### (≈ 1 line)
        out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes],
                                                      feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
        ### END CODE HERE ###

        # Print predictions info
        print('Found {} boxes for {}'.format(len(out_boxes), image_file))
        # Generate colors for drawing bounding boxes.
        colors = generate_colors(class_names)
        # Draw bounding boxes on the image file
        draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
        # Save the predicted bounding box on the image
        image.save(os.path.join("out", image_file), quality=90)
        # Display the results in the notebook
        output_image = scipy.misc.imread(os.path.join("out", image_file))
        imshow(output_image)

        return out_scores, out_boxes, out_classes

    Run the following cell on the “test.jpg” image to verify that your function is correct.

    out_scores, out_boxes, out_classes = predict(sess, "test.jpg")

    Found 7 boxes for test.jpg
    car 0.60 (925, 285) (1045, 374)
    bus 0.67 (5, 267) (220, 407)
    car 0.68 (705, 279) (786, 351)
    car 0.70 (947, 324) (1280, 704)
    car 0.75 (159, 303) (346, 440)
    car 0.80 (762, 282) (942, 412)
    car 0.89 (366, 299) (745, 648)

    Expected Output:

    Found 7 boxes for test.jpg
    car 0.60 (925, 285) (1045, 374)
    car 0.66 (706, 279) (786, 350)
    bus 0.67 (5, 266) (220, 407)
    car 0.70 (947, 324) (1280, 705)
    car 0.74 (159, 303) (346, 440)
    car 0.80 (761, 282) (942, 412)
    car 0.89 (367, 300) (745, 648)

    The model you’ve just run is actually able to detect 80 different classes listed in “coco_classes.txt”. To test the model on your own images:
    1. Click on “File” in the upper bar of this notebook, then click “Open” to go on your Coursera Hub.
    2. Add your image to this Jupyter Notebook’s directory, in the “images” folder
    3. Write your image's name in the code cell above
    4. Run the code and see the output of the algorithm!

    If you were to run your session in a for loop over all your images, here's what you would get:
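    Such a loop could be sketched as follows; filter_image_files is a hypothetical helper, and the commented lines assume the sess, predict, and "images" folder from this notebook:

```python
def filter_image_files(names, exts=(".jpg", ".jpeg", ".png")):
    # Keep only image filenames, in a stable sorted order
    return sorted(n for n in names if n.lower().endswith(exts))

print(filter_image_files(["0001.jpg", "notes.txt", "0002.png"]))  # ['0001.jpg', '0002.png']

# import os
# for image_file in filter_image_files(os.listdir("images")):
#     out_scores, out_boxes, out_classes = predict(sess, image_file)
```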



    Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley
    Thanks to drive.ai for providing this dataset!


    What you should remember:
    - YOLO is a state-of-the-art object detection model that is fast and accurate
    - It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
    - The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
    - You filter through all the boxes using non-max suppression. Specifically:
    - Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
    - Intersection over Union (IoU) thresholding to eliminate overlapping boxes
    - Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
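    To make the IoU criterion from that filtering step concrete, here is a minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box1, box2):
    # Corners of the intersection rectangle
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # 0 if boxes don't overlap
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)         # intersection over union

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 0.142857... (overlap 1, union 7)
```

    In non-max suppression, boxes whose IoU with a higher-scoring box exceeds a threshold are discarded.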

    References: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener’s github repository. The pretrained weights used in this exercise came from the official YOLO website.
    - Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - You Only Look Once: Unified, Real-Time Object Detection (2015)
    - Joseph Redmon, Ali Farhadi - YOLO9000: Better, Faster, Stronger (2016)
    - Allan Zelener - YAD2K: Yet Another Darknet 2 Keras
    - The official YOLO website (https://pjreddie.com/darknet/yolo/)

    Car detection dataset:

    The Drive.ai Sample Dataset (provided by drive.ai) is licensed under a Creative Commons Attribution 4.0 International License. We are especially grateful to Brody Huval, Chih Hu and Rahul Patel for collecting and providing this dataset.
