
I See You: Computer Vision Fundamentals

Published: 2023/12/15


My introduction to Computer Vision happened in 2017, when I was doing the Self-Driving Car Nanodegree from Udacity. The first semester was mainly about Computer Vision and Deep Learning, which sparked my interest in the subject. This post covers a basic introduction to Computer Vision, as well as camera calibration and affine transformations.

The goal of computer vision is to help machines see and understand the content of digital images. It deals with perceiving and understanding the world around you through images. Each digital image is made up of pixels, the smallest building blocks of an image. Mathematically, it's these pixels that contain different values for different features, such as colors. A simplified example is an image in an RGB color scheme, where every pixel contains values for red, green, and blue. In this case, the image can be seen as a matrix whose values can be used by different algorithms. A video stream is just a collection of 2D images played over time.
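To make the matrix view concrete, here is a minimal NumPy sketch (the pixel values are made up for illustration):

```python
import numpy as np

# A tiny synthetic "image": 2x2 pixels, 3 channels (R, G, B).
# Real images loaded with OpenCV or PIL have exactly this structure,
# just with many more rows and columns.
img = np.array([
    [[255, 0, 0], [0, 255, 0]],     # top row: a red pixel, a green pixel
    [[0, 0, 255], [255, 255, 255]]  # bottom row: a blue pixel, a white pixel
], dtype=np.uint8)

print(img.shape)  # (2, 2, 3): height, width, channels
print(img[0, 0])  # the red pixel's channel values
```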

Different algorithms can be used to extract information from images and videos. These algorithms might look at different features in the image and apply different techniques:


  • Colour detection: different colors are encoded differently mathematically.

  • Edge detection: edge detection helps the computer distinguish between different object shapes, sizes, etc.

  • Masking/unmasking: only using a specified area of interest. For example, if you are looking for lane lines in dashcam footage, you might only want to look at the lower half of the image.

  • Shape and feature extraction: using colors and shapes to identify objects.

  • Machine/deep learning: the machine can also use different features to learn about different objects on its own.
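As a small sketch of the masking idea from the list above, assuming the frame is already a NumPy array (the function name and dummy frame are illustrative):

```python
import numpy as np

def mask_lower_half(img):
    """Keep only the lower half of the image (e.g. where lane lines
    appear in a dashcam frame); zero out everything else."""
    masked = np.zeros_like(img)
    h = img.shape[0]
    masked[h // 2:, :] = img[h // 2:, :]
    return masked

frame = np.full((4, 6), 200, dtype=np.uint8)  # dummy grayscale frame
roi = mask_lower_half(frame)
print(roi[0, 0], roi[3, 0])  # 0 200 - top half zeroed, bottom half kept
```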

How to apply these techniques/algorithms? There are different libraries, but OpenCV is one of the most versatile and widely used. It's open-source, originally developed by Intel, and supports various programming platforms such as Python and C++. It is well documented and, thanks to its large user base, online help is readily available.

These algorithms and techniques can then be used for various Computer Vision tasks such as:

  • Object Classification: What broad category of object is in this image?

  • Object Identification: Which type of a given object is in this image?

  • Object Verification: Is the object in the image?

  • Object Detection: Where are the objects in the image?

  • Object Landmark Detection: What are the key points for the object in the image?

  • Object Segmentation: Which pixels belong to the object in the image?

  • Object Recognition: What objects are in this image, and where are they?

But before these algorithms can be applied, some image processing is required. Image processing is an integral part of computer vision. The images are preprocessed:

  • to prepare the image for the algorithm
  • to clean up the image or a dataset for the algorithm to use
  • to generate new images for machine/deep learning to use
  • to better understand the scene, e.g. by using a perspective transform

Camera Model


The image coming out of any camera must first be undistorted. Image distortion occurs when a camera looks at 3D objects in the real world and transforms them into a 2D image; this transformation isn't perfect. Distortion changes the apparent shape and size of these 3D objects. So, the first step in analysing camera images is to undo this distortion so that you can get correct and useful information out of them.

Image Courtesy: Udacity

In a pinhole camera, a 2D image is formed when the light from 3D objects in the real world passes through the pinhole and falls on the screen. The image formed is reversed, as shown in the figure. The image then needs to be converted using the camera matrix.

Image Courtesy: Udacity

However, most cameras don't use just a pinhole. They use lenses, which cause distortion.

Types of Distortion


Radial Distortion


Real cameras use curved lenses to form an image, and light rays often bend a little too much or too little at the edges of these lenses. This creates an effect that distorts the edges of images, so that lines or objects appear more or less curved than they actually are. This is called radial distortion, and it’s the most common type of distortion.


Image Courtesy: OpenCV

Another type of distortion is tangential distortion. This occurs when a camera's lens is not aligned perfectly parallel to the imaging plane, where the camera film or sensor is. This makes an image look tilted, so that some objects appear farther away or closer than they actually are.

Courtesy: Udacity

Distortion Coefficients and Correction


The first step for distortion correction is finding the distortion coefficients. There are three coefficients needed to correct for radial distortion: k1, k2, and k3, and two for tangential distortion: p1 and p2.

To correct the appearance of radially distorted points in an image, one can use the correction formula:

x_corrected = x (1 + k1 r² + k2 r⁴ + k3 r⁶)
y_corrected = y (1 + k1 r² + k2 r⁴ + k3 r⁶)

where (x, y) is a point in a distorted image, k1, k2, and k3 are the radial distortion coefficients of the lens, and r² = x² + y².
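As a quick numeric sketch of this radial model (the function name is illustrative, and the coefficient values below are made up):

```python
def radial_distort(x, y, k1, k2, k3):
    """Apply the radial distortion factor (1 + k1*r^2 + k2*r^4 + k3*r^6)
    to a normalized point (x, y)."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * factor, y * factor

# With all coefficients zero, the point is unchanged (no distortion).
print(radial_distort(0.5, 0.25, 0.0, 0.0, 0.0))  # (0.5, 0.25)
```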

To undistort these points, OpenCV calculates r, the known distance between a point in an undistorted (corrected) image and the center of the image distortion. This center is often the center of the image itself and is sometimes referred to as the distortion center.

Similarly, the tangential distortion correction can be applied as:

x_corrected = x + [2 p1 x y + p2 (r² + 2x²)]
y_corrected = y + [p1 (r² + 2y²) + 2 p2 x y]

The corrected coordinates x and y are then converted to normalised image coordinates. Normalised image coordinates are calculated from pixel coordinates by translating to the optical center and dividing by the focal length in pixels:

x_normalised = (x_pixel − cx) / fx
y_normalised = (y_pixel − cy) / fy

where fx, fy are the camera focal lengths and cx, cy are the optical centers.
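A minimal sketch of this pixel-to-normalised conversion (the function name and the fx, fy, cx, cy values are made up for illustration):

```python
def pixel_to_normalized(u, v, fx, fy, cx, cy):
    """Convert pixel coordinates (u, v) to normalized image coordinates
    by translating to the optical center and dividing by focal length."""
    return (u - cx) / fx, (v - cy) / fy

# The optical center itself maps to (0, 0).
print(pixel_to_normalized(320.0, 240.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0))
```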

The distortion coefficient k3 is required to accurately reflect major radial distortion (like in wide angle lenses). However, for minor radial distortion, which most regular camera lenses have, k3 has a value close to or equal to zero and is negligible. So, in OpenCV, you can choose to ignore this coefficient; this is why it appears at the end of the distortion values array: [k1, k2, p1, p2, k3].


Methodology

For distortion correction, the most common approach is to use chessboard images. The process involves mapping distorted points to undistorted points in order to measure the amount of distortion. The chessboard is a great place to start, as it has multiple checkpoints (corners) that can be used to identify distortion at various locations in the image. It's better to do this for multiple images in order to gauge the distortion fully. The general recommendation is to use >20 images.

1 - Get a chessboard and take pictures (>20) from different angles to have a starting set. The basic idea is to find the corners in the distorted chessboard images and map them to undistorted corners in the real world.

2 - Start by preparing the object points, which are the undistorted coordinates of the chessboard corners in the world. Assume the chessboard is fixed on the (x, y) plane at z = 0, so that the object points are the same for each calibration image. Thus, objp is just a replicated array of coordinates, and objpoints is appended with a copy of it every time.

3 - All chessboard corners in a test image are detected using the OpenCV function findChessboardCorners() and visualised with drawChessboardCorners().

Output from findChessboardCorners() and drawChessboardCorners()

4 - imgpoints, the corners of the distorted image in the 2D image plane, is appended with the (x, y) pixel position of each corner for every successful chessboard detection.

5 - The resulting objpoints and imgpoints are used to compute the camera calibration and distortion coefficients using the cv2.calibrateCamera() function.

6 - This distortion correction is applied to a test image using the cv2.undistort() function, with the following result:

After that, these distortion coefficients are written to wide_dist_pickle.p, to be used later for distortion correction of the camera.
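A sketch of how those pickled coefficients can be read back in a later session (the placeholder matrix and temporary path below are illustrative; in the actual workflow, mtx and dist come from cv2.calibrateCamera()):

```python
import os
import pickle
import tempfile

import numpy as np

# Placeholder calibration results for illustration.
mtx = np.eye(3)     # camera matrix
dist = np.zeros(5)  # distortion coefficients [k1, k2, p1, p2, k3]

# Save the calibration result, as in the step above...
path = os.path.join(tempfile.mkdtemp(), "wide_dist_pickle.p")
with open(path, "wb") as f:
    pickle.dump({"mtx": mtx, "dist": dist}, f)

# ...then load it back later before undistorting new camera frames.
with open(path, "rb") as f:
    calib = pickle.load(f)

print(calib["mtx"].shape, calib["dist"].shape)  # (3, 3) (5,)
```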

Functions for distortion correction


Finding chessboard corners (for an 8x6 board):


ret, corners = cv2.findChessboardCorners(gray, (8,6), None)

Drawing detected corners on an image:


img = cv2.drawChessboardCorners(img, (8,6), corners, ret)

Camera calibration, given object points, image points, and the shape of the grayscale image:


ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)

Undistorting a test image:


dst = cv2.undistort(img, mtx, dist, None, mtx)

The shape of the image, which is passed into the calibrateCamera function, is just the width and height of the image. One way to retrieve these values is from the grayscale image shape array, gray.shape[::-1].

Another way to retrieve the image shape is to get it directly from the color image, by retrieving the first two values of the color image shape array with img.shape[1::-1]. Grayscale images, on the other hand, have only 2 dimensions (color images have three: height, width, and depth).

Full Code


import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import pickle

# Prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (8,5,0)
objp = np.zeros((6*9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

# Arrays to store object points and image points from all the images
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

# Make a list of calibration images
images = glob.glob('/Users/architrastogi/Documents/camera_cal/calibration*.jpg')

# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chessboard corners
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)

    # If found, add object points, image points
    if ret == True:
        objpoints.append(objp)
        imgpoints.append(corners)

        # Draw the corners
        cv2.drawChessboardCorners(img, (9, 6), corners, ret)
        write_name = 'corners_found' + str(idx) + '.jpg'
        print(write_name)

# Test undistortion on an image
img = cv2.imread('/Users/architrastogi/Documents/camera_cal/calibration2.jpg')
img_size = (img.shape[1], img.shape[0])

# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)

# Apply the undistortion to the image based on the calibration
dst = cv2.undistort(img, mtx, dist, None, mtx)

# Write out the undistorted image
cv2.imwrite('/Users/architrastogi/Documents/output_images/test_undist.jpg', dst)

# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump(dist_pickle, open("/Users/architrastogi/Documents/camera_cal/wide_dist_pickle.p", "wb"))

# Visualize undistortion
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(dst)
ax2.set_title('Undistorted Image', fontsize=30)

Affine Transformations


Once the distortion correction is done, the images can be used for computer vision work. Geometric transformations can be applied to them for various purposes. The most common are affine transformations: those that can be expressed as a matrix multiplication (linear transformation) followed by a vector addition (translation). The reasons you might want to apply transformations include:

  • To enhance the dataset: sometimes machine/deep learning algorithms need a bigger dataset than is available. In those cases, one can augment the dataset by applying these transformations to the images.

  • To extract some particular information: you might only be interested in rotated figures, or need a bird's-eye view for your algorithm.

Different Affine Transformations and Implementation

OpenCV provides two transformation functions, cv2.warpAffine and cv2.warpPerspective.

Scaling


Scaling is a linear transformation that enlarges or shrinks objects by a scale factor that is the same in all directions. Scaling is just resizing the image. OpenCV provides the function cv2.resize() for this purpose.

Translation


A translation is a function that moves every point a constant distance in a specified direction. Mathematically, the transformation matrix M can be represented as

M = [[1, 0, tx],
     [0, 1, ty]]

where tx and ty are the translations in x and y.

If the original picture looks like:

Original Image | Grayscale Translated Image

A sample code to achieve this could look like :


img = cv2.imread('/Users/architrastogi/Documents/blog/michigan.jpeg', 0)
rows, cols = img.shape

M = np.float32([[1, 0, 100], [0, 1, 50]])
dst = cv2.warpAffine(img, M, (cols, rows))
cv2.imwrite('/Users/architrastogi/Documents/blog/michigan_trans.jpeg', dst)
cv2.imshow('img', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()

Rotation


Rotation is a circular transformation around a point or an axis. We can specify the angle of rotation to rotate our image around a point or an axis.


The rotation transformation matrix can be defined as

M = [[cos θ, −sin θ],
     [sin θ,  cos θ]]

where θ is the angle of rotation.

Grayscale image rotated by 90 degrees

The sample code to achieve this :


img = cv2.imread('/Users/architrastogi/Documents/blog/michigan.jpeg', 0)
rows, cols = img.shape

M = cv2.getRotationMatrix2D((cols/2, rows/2), 90, 1)
dst = cv2.warpAffine(img, M, (cols, rows))
cv2.imwrite('/Users/architrastogi/Documents/blog/michigan_rot.jpeg', dst)
cv2.imshow('img', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()

Perspective transform


A perspective transform maps the points in a given image to different, desired image points with a new perspective. One of the most common uses of a perspective transform is to convert to a bird's-eye view.

Courtesy: Udacity

Aside from creating a bird’s eye view representation of an image, a perspective transform can also be used for all kinds of different view points.


Courtesy: Udacity

The difference between camera calibration and a perspective transform is that in a perspective transform you map image points to different image points, while in calibration you map object points to image points. OpenCV provides tailored functions for the perspective transform.

Compute the perspective transform, M, given source and destination points:


M = cv2.getPerspectiveTransform(src, dst)

Compute the inverse perspective transform:


Minv = cv2.getPerspectiveTransform(dst, src)

Warp an image using the perspective transform, M:


warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)

You can detect the source points either manually or using specific programs.

That's all for now.

Written while listening to Father John Misty


Translated from: https://medium.com/swlh/i-see-you-computer-vision-fundamentals-64cc662d0b05
