

When Fine-tuning Applies

Published: 2023/12/4
This post collects notes on when and how fine-tuning applies, shared here for reference.

https://www.quora.com/How-do-I-fine-tune-a-Caffe-pre-trained-model-to-do-image-classification-on-my-own-dataset

Basic concept and examples:

Deep nets (CNNs) like AlexNet, VGGNet, or GoogLeNet are trained to classify images into different categories. Before the recent trend of deep nets, the typical method for classification was to extract features from the images and train an SVM on them. In that procedure the features were computed beforehand with a method like HOG or bag-of-words. In a deep net, the features and the weights for classifying into different classes are all learned end to end; you don't need to extract features with a separate method.

So when you train an AlexNet, it learns the feature representation as well as the weights for classifying the image. You just feed in an image and get the predicted class.

The notion is that the features learned for classifying the 1000 ImageNet object categories are often sufficient to classify a different set of object categories.

This may not be the case every time; it depends on the dataset, the type of images, and the classification task.

If your images look similar to the original training images, you can reuse the feature-representation part of the deep net instead of learning it again. This is the idea of fine-tuning.

So what you do is copy the feature-representation layers as they are from the network you already trained, and learn only the new weights required to classify those features into the categories your dataset has.
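The idea above can be sketched in a few lines of NumPy: a frozen "pretrained" feature layer stands in for conv1..fc7, and only a new linear head (the replaced fc8) is trained. This is an illustration of the principle, not Caffe code; all names, sizes, and the toy dataset are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature layer: in real fine-tuning these weights are copied
# from a network trained on a large dataset (e.g. ImageNet) and kept frozen.
W_feat = rng.normal(size=(10, 4))

def features(x):
    """Frozen feature extractor (one ReLU layer standing in for conv1..fc7)."""
    return np.maximum(x @ W_feat, 0.0)

# Toy "new dataset": labels made predictable from the frozen features,
# mimicking a task where the pretrained representation transfers well.
X = rng.normal(size=(60, 10))
W_true = rng.normal(size=(4, 3))
y = (features(X) @ W_true).argmax(axis=1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# New classification head (the replaced "fc8"): the ONLY thing we train.
W_head = np.zeros((4, 3))
for _ in range(300):
    F = features(X)                    # W_feat is never updated
    P = softmax(F @ W_head)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0     # softmax cross-entropy gradient
    W_head -= 0.5 * (F.T @ G) / len(y)

acc = (softmax(features(X) @ W_head).argmax(axis=1) == y).mean()
print(f"train accuracy of the new head: {acc:.2f}")
```

Because the frozen features already separate the classes, training only the small head recovers a good classifier quickly, which is exactly why fine-tuning is cheaper than training from scratch.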

Implementation-level details in Caffe:

I assume here that you know how to create LMDB files for your new dataset.

If you want to fine-tune AlexNet, you copy the first 7 of its 8 learned layers as they are and change only the last layer, the fully connected layer fc8.

The changes you need to make are in train_val.prototxt.

Take the train_val.prototxt file from AlexNet and rename the last layer fc8 to fc8_tune.

You must change the name of this layer; otherwise Caffe copies the old fc8 weights from the pretrained model into it. Be careful about this.
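As a sketch, the renamed layer in train_val.prototxt might look like the following. The name fc8_tune follows the text; num_output: 20, the lr_mult values, and the fillers are illustrative assumptions, not taken from the official AlexNet prototxt:

```
layer {
  name: "fc8_tune"            # renamed: pretrained fc8 weights will NOT be copied in
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_tune"
  # higher learning rates for the new head than for the copied layers (assumption)
  param { lr_mult: 10  decay_mult: 1 }
  param { lr_mult: 20  decay_mult: 0 }
  inner_product_param {
    num_output: 20            # set to the number of classes in YOUR dataset
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
```

Raising lr_mult on the new head while leaving the copied layers at their defaults lets the fresh weights learn quickly without disturbing the pretrained features much.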

And you need to change the train.sh file to load the pretrained weights:

$TOOLS/caffe train --solver=quick_solver.prototxt --weights=bvlc_googlenet.caffemodel 2> train.log

Here the fine-tuning is done on GoogLeNet; change the weights file accordingly for your model.

And you need to update the number of classes (num_output) in the new fc8 layer.

If you have any more questions please ask.


Steps:

1) prepare the input data
2) change the data of the input layer and the fc8 layer in imagenet_train.prototxt and imagenet_val.prototxt
3) type finetune_net imagenet_solver.prototxt caffe_reference_imagenet_model in the terminal
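For step 2, the input Data layer is pointed at the new dataset's LMDB. A minimal sketch follows; the source path, mean file, crop size, and batch size are all placeholders you would replace with your own values:

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    crop_size: 227                    # AlexNet-style input crop (assumption)
    mean_file: "my_mean.binaryproto"  # placeholder: mean computed over YOUR data
  }
  data_param {
    source: "my_dataset_train_lmdb"   # placeholder: the LMDB built in step 1
    batch_size: 64                    # lower this if the GPU runs out of memory
    backend: LMDB
  }
}
```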


Question:

Check failed: error == cudaSuccess (2 vs. 0) out of memory

The message is clear: your GPU card is out of memory, so you will need to reduce the batch_size in your train_val.prototxt.

