

How to build a three-layer neural network from scratch


by Daphne Cornelisse


How to build a three-layer neural network from scratch

In this post, I will go through the steps required to build a three-layer neural network. I'll work through a problem and explain the process, along with the most important concepts, along the way.

The problem to solve

A farmer in Italy was having a problem with his labelling machine: it mixed up the labels of three wine cultivars. Now he has 178 bottles left, and nobody knows which cultivar they came from! To help this poor man, we will build a classifier that recognizes the cultivar based on 13 attributes of the wine.

The fact that our data is labeled (with one of the three cultivars' labels) makes this a supervised learning problem. Essentially, what we want to do is use our input data (the 178 unclassified wine bottles), put it through our neural network, and get the right cultivar label for each wine as the output.

We will train our algorithm to get better and better at predicting (y-hat) which bottle belongs to which label.


Now it is time to start building the neural network!


Approach

Building a neural network is almost like building a very complicated function, or putting together a very difficult recipe. In the beginning, the ingredients or steps you will have to take can seem overwhelming. But if you break everything down and do it step by step, you will be fine.


In short:


  • The input layer (x) consists of 13 neurons, one for each of the 13 wine attributes. (The dataset itself contains 178 samples.)
  • A1, the first hidden layer, consists of 8 neurons.
  • A2, the second hidden layer, consists of 5 neurons.
  • A3, the third and output layer, consists of 3 neurons.

Step 1: the usual prep

Import all necessary libraries (NumPy, scikit-learn, pandas) and the dataset, and define x and y.

# importing all the libraries and dataset
import pandas as pd
import numpy as np

df = pd.read_csv('../input/W1data.csv')
df.head()

# Package imports
# Matplotlib
import matplotlib
import matplotlib.pyplot as plt

# SciKitLearn is a machine learning utilities library
import sklearn
# The sklearn dataset module helps generating datasets
import sklearn.datasets
import sklearn.linear_model
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import accuracy_score
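The W1data.csv file is not reproduced here, so as a hedged sketch of the data preparation we can use scikit-learn's built-in copy of the same wine dataset (178 samples, 13 attributes, 3 cultivars) to show how X and the one-hot labels y can be defined. The column layout of the original CSV may differ from this.

```python
# Hypothetical stand-in for W1data.csv: scikit-learn ships the same
# 178-sample, 13-attribute wine dataset, so we use it to define X
# and to one-hot encode the three cultivar labels into y.
import numpy as np
from sklearn.datasets import load_wine

wine = load_wine()
X = wine.data                # shape (178, 13): one row per bottle
y = np.eye(3)[wine.target]   # shape (178, 3): one-hot cultivar labels

print(X.shape, y.shape)      # (178, 13) (178, 3)
```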

Step 2: initialization

Before we can use our weights, we have to initialize them. Because we don't yet have meaningful values for the weights, we start with random values between 0 and 1.

In Python, NumPy's random functions generate "random numbers." However, these numbers are not truly random. They are pseudorandom, meaning they are produced by a deterministic formula that makes the output look random. To generate a number, the formula takes the previously generated value as its input. If there is no previous value, it often takes the current time as the first value.

That is why we seed the generator: to make sure that we always get the same random numbers. We provide a fixed value that the number generator can start with, which is zero in this case.

np.random.seed(0)
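The initialise_parameters function called later in the article is defined only in the linked Kaggle notebook. Here is a minimal sketch of what such a function might look like, assuming the layer sizes from the overview (13 → 8 → 5 → 3) and weights drawn uniformly from [0, 1) as the text describes; the notebook's actual signature (the call later in the article passes a single nn_hdim) and scaling may differ.

```python
import numpy as np

def initialise_parameters(nn_input_dim, nn_hdim1, nn_hdim2, nn_output_dim):
    # Seed the generator so every run starts from the same "random" weights
    np.random.seed(0)
    model = {
        'W1': np.random.rand(nn_input_dim, nn_hdim1),   # values in [0, 1)
        'b1': np.zeros((1, nn_hdim1)),
        'W2': np.random.rand(nn_hdim1, nn_hdim2),
        'b2': np.zeros((1, nn_hdim2)),
        'W3': np.random.rand(nn_hdim2, nn_output_dim),
        'b3': np.zeros((1, nn_output_dim)),
    }
    return model

model = initialise_parameters(13, 8, 5, 3)
print(model['W1'].shape, model['W3'].shape)  # (13, 8) (5, 3)
```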

Step 3: forward propagation

There are roughly two parts to training a neural network. First, you propagate forward through the NN. That is, you "make steps" forward and compare those results with the real values to get the difference between your output and what it should be. You basically see how the NN is doing and find the errors.

After we have initialized the weights with pseudorandom numbers, we take a linear step forward. We calculate this by taking the dot product of our input A0 with the randomly initialized weights, and adding a bias. We start with a bias of 0. This is represented as: z1 = A0 · W1 + b1.

Now we take our z1 (our linear step) and pass it through our first activation function. Activation functions are very important in neural networks. Essentially, they convert an input signal to an output signal, which is why they are also known as transfer functions. They introduce non-linear properties to our functions by converting the linear input to a non-linear output, making it possible to represent more complex functions.

There are different kinds of activation functions (explained in depth in this article). For this model, we chose to use the tanh activation function for our two hidden layers — A1 and A2 — which gives us an output value between -1 and 1.


Since this is a multi-class classification problem (we have 3 output labels), we will use the softmax function for the output layer, A3, because it computes probabilities for the classes by outputting values between 0 and 1 that sum to 1.

By passing z1 through the activation function, we have created our first hidden layer — A1 — which can be used as input for the computation of the next linear step, z2.


In Python, this process looks like this:


# This is the forward propagation function
def forward_prop(model, a0):
    # Load parameters from model
    W1, b1, W2, b2, W3, b3 = model['W1'], model['b1'], model['W2'], model['b2'], model['W3'], model['b3']
    # Do the first linear step
    z1 = a0.dot(W1) + b1
    # Put it through the first activation function
    a1 = np.tanh(z1)
    # Second linear step
    z2 = a1.dot(W2) + b2
    # Put it through the second activation function
    a2 = np.tanh(z2)
    # Third linear step
    z3 = a2.dot(W3) + b3
    # For the third layer we use the softmax activation function
    a3 = softmax(z3)
    # Store all results in the cache
    cache = {'a0': a0, 'z1': z1, 'a1': a1, 'z2': z2, 'a2': a2, 'a3': a3, 'z3': z3}
    return cache
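Note that forward_prop calls a softmax helper that is not shown in this excerpt. A minimal row-wise implementation consistent with the call site might look like this; the notebook's version may differ in detail.

```python
import numpy as np

def softmax(z):
    # Subtract the row-wise max before exponentiating, for numerical
    # stability; this does not change the result because softmax is
    # invariant to shifting each row by a constant.
    exp_scores = np.exp(z - np.max(z, axis=1, keepdims=True))
    return exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

probs = softmax(np.array([[2.0, 1.0, 0.1]]))
print(np.round(probs, 3))  # one probability per class, each row sums to 1
```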

In the end, all our values are stored in the cache.


Step 4: backward propagation

After we forward propagate through our NN, we backward propagate our error gradient to update our weight parameters. We know our error, and want to minimize it as much as possible.


We do this by taking the derivative of the error function with respect to the weights (W) of our NN, using gradient descent.
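The error (loss) function itself is not shown in this excerpt. Since the output layer is a softmax over three classes, a natural choice, and the one this sketch assumes, is the categorical cross-entropy loss:

```python
import numpy as np

def softmax_loss(y, y_hat):
    # Mean categorical cross-entropy between one-hot targets y and
    # predicted probabilities y_hat; clip to avoid taking log(0).
    m = y.shape[0]
    return -np.sum(y * np.log(np.clip(y_hat, 1e-10, None))) / m

# A confident, correct prediction gives a loss close to zero:
y = np.array([[1.0, 0.0, 0.0]])
y_hat = np.array([[0.99, 0.005, 0.005]])
print(softmax_loss(y, y_hat))
```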

Let's visualize this process with an analogy.

Imagine you went out for a walk in the mountains during the afternoon. But now it's an hour later and you are a bit hungry, so it's time to go home. The only problem is that it is dark and there are many trees, so you can't see either your home or where you are. Oh, and you forgot your phone at home.

But then you remember your house is in a valley, the lowest point in the whole area. So if you just walk down the mountain step by step until you don’t feel any slope, in theory you should arrive at your home.


So there you go, carefully making your way down, step by step. Now think of the mountain as the loss function, and you are the algorithm, trying to find your home (that is, the lowest point). Every time you take a step downwards, we update your location coordinates (the algorithm updates the parameters).

The loss function is represented by the mountain. To get to a low loss, the algorithm follows the slope — that is the derivative — of the loss function.


When we walk down the mountain, we are updating our location coordinates. The algorithm updates the parameters of the neural network. By getting closer to the minimum point, we are approaching our goal of minimizing our error.


In reality, gradient descent is less tidy than the analogy suggests: the algorithm repeatedly takes small steps against the slope, often zig-zagging, until the loss stops decreasing.

We always start with calculating the slope of the loss function with respect to z, the slope of the linear step we take.


Notation is as follows: dv is the derivative of the loss function, with respect to a variable v.


Next we calculate the slope of the loss function with respect to our weights and biases. Because this is a three-layer NN, we iterate this process for z3, z2, z1, together with W3, W2, W1 and b3, b2, b1, propagating backwards from the output to the input layer.

This is how this process looks in Python:


# This is the backward propagation function
def backward_prop(model, cache, y):
    # Load parameters from model
    W1, b1, W2, b2, W3, b3 = model['W1'], model['b1'], model['W2'], model['b2'], model['W3'], model['b3']
    # Load forward propagation results
    a0, a1, a2, a3 = cache['a0'], cache['a1'], cache['a2'], cache['a3']
    # Get number of samples
    m = y.shape[0]
    # Calculate loss derivative with respect to the output
    dz3 = loss_derivative(y=y, y_hat=a3)
    # Calculate loss derivative with respect to the third layer weights
    dW3 = 1/m * (a2.T).dot(dz3)
    # Calculate loss derivative with respect to the third layer bias
    db3 = 1/m * np.sum(dz3, axis=0)
    # Calculate loss derivative with respect to the second layer
    dz2 = np.multiply(dz3.dot(W3.T), tanh_derivative(a2))
    # Calculate loss derivative with respect to the second layer weights
    dW2 = 1/m * np.dot(a1.T, dz2)
    # Calculate loss derivative with respect to the second layer bias
    db2 = 1/m * np.sum(dz2, axis=0)
    # Repeat the same pattern for the first layer
    dz1 = np.multiply(dz2.dot(W2.T), tanh_derivative(a1))
    dW1 = 1/m * np.dot(a0.T, dz1)
    db1 = 1/m * np.sum(dz1, axis=0)
    # Store gradients
    grads = {'dW3': dW3, 'db3': db3, 'dW2': dW2, 'db2': db2, 'dW1': dW1, 'db1': db1}
    return grads
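The backward pass relies on two helpers, loss_derivative and tanh_derivative, that are not shown in this excerpt. Assuming a softmax output with cross-entropy loss, and that tanh_derivative receives the activation a = tanh(z) (as the calls suggest), they reduce to:

```python
import numpy as np

def loss_derivative(y, y_hat):
    # Gradient of the cross-entropy loss with respect to z3 when the
    # output layer is a softmax: it simplifies to (y_hat - y).
    return y_hat - y

def tanh_derivative(a):
    # Derivative of tanh written in terms of the activation a = tanh(z):
    # d/dz tanh(z) = 1 - tanh(z)^2 = 1 - a^2
    return 1 - np.power(a, 2)

print(tanh_derivative(np.tanh(0.0)))  # 1.0 at z = 0
```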

Step 5: the training phase

In order to reach the optimal weights and biases that will give us the desired output (the three wine cultivars), we will have to train our neural network.


I think this is very intuitive. For almost anything in life, you have to train and practice many times before you are good at it. Likewise, a neural network will have to undergo many epochs or iterations to give us an accurate prediction.


When you are learning anything, let's say you are reading a book, you have a certain pace. This pace should not be too slow, as reading the book would take ages. But it should not be too fast either, since you might miss a very valuable lesson in the book.

In the same way, you have to specify a "learning rate" for the model. The learning rate is the multiplier used to update the parameters, and it determines how rapidly they can change. If the learning rate is low, training will take longer. However, if the learning rate is too high, we might step over a minimum. The update rule is expressed as: W := W − a · dL(w), where:

  • := means that this is a definition (an assignment), not an equation or proven statement.
  • a is the learning rate, called alpha.
  • dL(w) is the derivative of the total loss with respect to our weight w.
  • d denotes taking a derivative.
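The parameter-update step this rule describes is not shown in the excerpt. A minimal sketch of it, applying W := W − a · dL(W) to every weight and bias, might look like this (the notebook's equivalent may differ in name and detail):

```python
import numpy as np

def update_parameters(model, grads, learning_rate):
    # Gradient-descent step: W := W - a * dL(W) for every weight and bias
    for key in ('W1', 'b1', 'W2', 'b2', 'W3', 'b3'):
        model[key] = model[key] - learning_rate * grads['d' + key]
    return model

# Tiny worked example with a single 1x1 "weight" per parameter:
model = {k: np.array([[1.0]]) for k in ('W1', 'b1', 'W2', 'b2', 'W3', 'b3')}
grads = {'d' + k: np.array([[2.0]]) for k in model}
model = update_parameters(model, grads, learning_rate=0.07)
print(model['W1'])  # 1.0 - 0.07 * 2.0 = 0.86
```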

We chose a learning rate of 0.07 after some experimenting.


# This is what we return at the end
model = initialise_parameters(nn_input_dim=13, nn_hdim=5, nn_output_dim=3)
model = train(model, X, y, learning_rate=0.07, epochs=4500, print_loss=True)
plt.plot(losses)

Finally, there is our graph. You can plot your accuracy and/or loss to get a nice picture of your prediction accuracy. After 4,500 epochs, our algorithm reaches an accuracy of 99.4382022472%.
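That accuracy can be computed by taking the argmax of the softmax output a3 for each sample and comparing it with the argmax of the one-hot labels y. A small illustrative sketch (the predicted probabilities below are made up, not the model's real outputs):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Hypothetical softmax outputs for three bottles, and their one-hot labels
a3 = np.array([[0.90, 0.05, 0.05],
               [0.10, 0.80, 0.10],
               [0.20, 0.20, 0.60]])
y = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])

y_pred = np.argmax(a3, axis=1)  # predicted cultivar per bottle
y_true = np.argmax(y, axis=1)   # actual cultivar per bottle
print(accuracy_score(y_true, y_pred))  # 1.0 for this toy example
```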

Brief summary

We start by feeding data into the neural network and perform several matrix operations on this input data, layer by layer. For each of our three layers, we take the dot product of the input with the weights and add a bias. Next, we pass this output through an activation function of choice.

The output of this activation function is then used as the input for the following layer, which follows the same procedure. This process is iterated three times, since we have three layers. Our final output is y-hat, the prediction of which cultivar each wine belongs to. This is the end of the forward propagation process.

We then calculate the difference between our prediction (y-hat) and the expected output (y) and use this error value during backpropagation.


During backpropagation, we take our error, the difference between our prediction y-hat and y, and mathematically push it back through the NN in the other direction. We are learning from our mistakes.

By taking the derivative of the functions we used during the first process, we try to discover what value we should give the weights in order to achieve the best possible prediction. Essentially we want to know what the relationship is between the value of our weight and the error that we get out as the result.


And after many epochs or iterations, the NN has learned to give us more accurate predictions by adapting its parameters to our dataset.


This post was inspired by the week 1 challenge from the Bletchley Machine Learning Bootcamp that started on the 7th of February. In the coming nine weeks, I’m one of 50 students who will go through the fundamentals of Machine Learning. Every week we discuss a different topic and have to submit a challenge, which requires you to really understand the materials.


If you have any questions or suggestions, let me know!

Or if you want to check out the whole code, you can find it here on Kaggle.


Recommended videos to get a deeper understanding on neural networks:


  • 3Blue1Brown’s series on neural networks

  • Siraj Raval’s series on Deep Learning

Original article: https://www.freecodecamp.org/news/building-a-3-layer-neural-network-from-scratch-99239c4af5d3/
