Human and Horse Prediction Using CNN
There are many transfer-learning approaches for image classification, for example: 1. VGG (e.g. VGG16 or VGG19), 2. GoogLeNet (e.g. InceptionV3), 3. Residual Networks (e.g. ResNet50). These are pre-trained models, but here we are going to build an end-to-end model from scratch following the convolutional neural network architecture.
Convolutional Neural Network (CNN)
There are many ways to define a CNN, but in simple terms, convolution refers to the mathematical combination of two functions to produce a third function; it merges two sets of information. In a CNN, convolution is used to analyze visual imagery. CNNs are also known as shift-invariant or space-invariant artificial neural networks, because of their shared-weights architecture and translation-invariance characteristics.
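To make this concrete, here is a small standalone sketch (hand-picked numbers, not part of the model built later) that slides a 3x3 vertical-edge kernel over a tiny 5x5 image and sums the element-wise products at each position; a CNN learns such kernel weights during training rather than having them chosen by hand.
import numpy as np

# A tiny 5x5 "image" with a vertical edge between the second and third columns
img = np.array([[0, 0, 1, 1, 1],
                [0, 0, 1, 1, 1],
                [0, 0, 1, 1, 1],
                [0, 0, 1, 1, 1],
                [0, 0, 1, 1, 1]], dtype=float)

# A hand-crafted 3x3 vertical-edge kernel; a CNN learns these weights instead
kernel = np.array([[ 1, 0, -1],
                   [ 1, 0, -1],
                   [ 1, 0, -1]], dtype=float)

# Slide the kernel over the image and sum the element-wise products at each
# position (cross-correlation, which is what convolutional layers compute)
kh, kw = kernel.shape
feature_map = np.array([[np.sum(img[i:i+kh, j:j+kw] * kernel)
                         for j in range(img.shape[1] - kw + 1)]
                        for i in range(img.shape[0] - kh + 1)])
print(feature_map)  # non-zero responses mark the windows containing the edge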
Layers in CNN
There are three types of layers in a CNN: the convolutional layer, the pooling layer, and the fully connected layer. Each of these layers has different parameters that can be optimized and performs a different task on the input data.
Pooling Layer
The pooling layer is a building block of a CNN. Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network. The pooling layer operates on each feature map independently. Max-pooling is the most common approach used in convolutional neural networks.
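As a quick standalone illustration (example numbers only), 2x2 max-pooling with stride 2 keeps the largest value in each non-overlapping 2x2 window, which halves each spatial dimension:
import numpy as np

# A 4x4 feature map produced by some convolutional layer
feature_map = np.array([[1, 3, 2, 1],
                        [4, 6, 5, 2],
                        [7, 2, 9, 1],
                        [3, 1, 4, 8]])

# Group the map into non-overlapping 2x2 blocks and take the max of each block
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[6 5]
#  [7 9]]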
Dataset
We have data for training:
500 horse images and 527 human (male and female) images
For validation:
122 horse images and 123 human (male and female) images
Dataset link
Implementation
Importing necessary libraries and packages.
import keras
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
Data Load
train_data = "train/"
validation_data = "validation/"
Here we have the train and validation datasets; we will use the train data for training and the validation data to check for overfitting. The dataset is also small, and before building a CNN model we ideally want a large number of images to get the most accurate model, so we will generate more images using image augmentation.
Data Preprocessing
# For training
train_data_gen = ImageDataGenerator(rescale=1./255,
                                    rotation_range=40,
                                    width_shift_range=0.2,
                                    height_shift_range=0.2,
                                    shear_range=0.2,
                                    zoom_range=0.2,
                                    horizontal_flip=True,
                                    fill_mode='nearest')
We generate more training data for model training using the image data generator parameters such as rotation, width/height shift, shear, and zoom range.
train_data1 = train_data_gen.flow_from_directory(train_data,
                                                 target_size=(150,150),
                                                 batch_size=32,
                                                 class_mode='binary')
Let's check the image classes.
train_data1.class_indices
The classes are divided as {'horses': 0, 'humans': 1}.
Now generating the validation data.
validation_data_gen = ImageDataGenerator(rescale=1./255)
validation_data1 = validation_data_gen.flow_from_directory(validation_data,
                                                           target_size=(150,150),
                                                           batch_size=32,
                                                           class_mode='binary')
Now plotting the generated images.
def plotImages(images_arr):
    fig, axes = plt.subplots(1, 5, figsize=(20, 20))
    axes = axes.flatten()
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
    plt.tight_layout()
    plt.show()
Let's plot some of the training images.
images = [train_data1[0][0][0] for i in range(5)]
plotImages(images)
Augmented images
Building the CNN
We are using Keras here, which keeps the code much simpler because it is a high-level API running on top of TensorFlow. So let's start building the model step by step.
cnn_model = keras.models.Sequential([
    keras.layers.Conv2D(filters=32, kernel_size=3, input_shape=[150,150,3]),
    keras.layers.MaxPooling2D(pool_size=(2,3)),
    keras.layers.Conv2D(filters=64, kernel_size=3),
    keras.layers.MaxPooling2D(pool_size=(2,2)),
    keras.layers.Conv2D(filters=128, kernel_size=3),
    keras.layers.MaxPooling2D(pool_size=(2,2)),
    keras.layers.Conv2D(filters=256, kernel_size=3),
    keras.layers.MaxPooling2D(pool_size=(2,2)),
    keras.layers.Dropout(0.5),
    keras.layers.Flatten(),                             # after this we build the dense (fully connected) part
    keras.layers.Dense(units=128, activation='relu'),   # first dense layer
    keras.layers.Dropout(0.1),
    keras.layers.Dense(units=256, activation='relu'),   # hidden layer
    keras.layers.Dropout(0.25),
    keras.layers.Dense(units=2, activation='softmax')   # output layer with 2 neurons
])
Basically, we used 128 neurons in the first dense layer and 256 in the hidden layer with ReLU activations, and for the output layer we used a softmax activation with one neuron per class. Now let's compile the model.
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
Let's set the optimizer and loss function now.
cnn_model.compile(optimizer=Adam(lr=0.0001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Before training the model we can set a path to save it; in my case, I have set a path on my local system.
model_path = 'human_horse_predict.h5'
checkpoint = ModelCheckpoint(model_path, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
Model Training
history = cnn_model.fit(train_data1,
                        epochs=100,
                        verbose=1,
                        validation_data=validation_data1,
                        callbacks=callbacks_list)
I trained this model using CUDA, which kept the training time short; you can also train it on Google Colab.
Now that the model is trained, let's look at the model summary and the training history.
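Keras models provide a summary() method that prints each layer's output shape and parameter count, which is a quick way to review the architecture we just trained:
cnn_model.summary()
Next, we plot the accuracy recorded in the training history.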
# Summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
Let's look at our model accuracy on the training and validation data. The loss curves can be plotted from the same history object, as sketched below.
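A minimal sketch of the companion loss plot, using the 'loss' and 'val_loss' keys that Keras records when validation data is passed to fit():
# Summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()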
You can see from these plots how the model starts to overfit as the epochs increase. Let's test the model.
Model Test
model1 = keras.models.load_model(model_path)
We pass the saved model path to load the trained model back.
Let's take some human and horse images and assign them to some variables.
# Horse images
h1 = 'train/horses/horse01-1.png'
h2 = 'train/horses/horse01-2.png'
h3 = 'train/horses/horse01-5.png'

# Human images
hu1 = 'train/humans/human01-02.png'
hu2 = 'train/humans/human01-05.png'
hu3 = 'train/humans/human01-08.png'
Creating a function that will preprocess an image into array format.
import numpy as np
from keras.preprocessing import image
Defining the function.
def pred_human_horse(model, horse_or_human):
    # The model was trained on 150x150 inputs, so resize to the same size
    test_image = image.load_img(horse_or_human, target_size=(150,150))
    test_image = image.img_to_array(test_image)/255
    test_image = np.expand_dims(test_image, axis=0)
    result = model.predict(test_image).round(3)
    pred = np.argmax(result)
    if pred == 0:
        print("Horse")
    else:
        print("Human")
Prediction
We are going to run the prediction on three of the horse images.
for horse_or_human in [h1, h2, h3]:
    pred_human_horse(model1, horse_or_human)
Horse
Horse
Horse
So, we got a good prediction here.
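The human images assigned earlier (hu1, hu2, hu3) can be checked the same way; the expected output is "Human" for each, although the exact results depend on the trained weights.
for horse_or_human in [hu1, hu2, hu3]:
    pred_human_horse(model1, horse_or_human)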
Hope you like this article. We can improve this model with hyperparameter tuning, and you can also try pre-trained models like VGG (e.g. VGG16 or VGG19); a minimal transfer-learning sketch follows below.
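As a starting point in that direction, here is a minimal transfer-learning sketch that swaps in a pre-trained VGG16 base; it reuses the generators and the 150x150 image size from above, and the frozen-base plus small-head setup shown here is a common pattern rather than a tuned configuration.
from keras.applications import VGG16
from keras.layers import Dense, Dropout, Flatten

# Pre-trained convolutional base with the ImageNet classification head removed
base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
base.trainable = False  # freeze the base so only the new head is trained

transfer_model = keras.models.Sequential([
    base,
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.25),
    Dense(2, activation='softmax')
])

transfer_model.compile(optimizer=Adam(lr=0.0001),
                       loss='sparse_categorical_crossentropy',
                       metrics=['accuracy'])
# transfer_model.fit(train_data1, validation_data=validation_data1, epochs=10)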
IPython notebook.
GitHub link for all machine learning and deep learning resources; you can also see my machine learning deployments using the Flask API.
Source: https://towardsdatascience.com/human-and-horse-prediction-using-cnn-563309f988ff