Getting started with caffe questions answers (excerpt)
This article excerpts part of "Getting started with caffe questions answers". For more details, download the PDF file getting-started-with-caffe-questions-answers.pdf.
The Caffe materials can be downloaded from Baidu Cloud:
Link: http://pan.baidu.com/s/1jIRJ6mU
Password: xehi
Q: Is there a minimum dataset size needed in order to get a good speedup on a Titan X GPU? In the past I have seen that GPU pipelines need to be filled in order to get the needed speedups.
A: Good question. Generally, the nature of DL requires an extensive training data set, which often means you have ample work to keep one or more GPUs fully busy. The model parameters also impact performance, but in most cases you will likely have plenty of work to keep a Titan X busy. The good news is that Caffe, cuDNN, and DIGITS all do a great job of making sure you get the maximum value out of whatever GPU resources you have available. In summary, use a framework that uses cuDNN and you should see very good speedups with a Titan X.
Q: Is there any reason why one would work with Theano over Caffe?
A: The approaches of the two frameworks are very different: Caffe is a DL framework, while Theano can be seen as a compiler. The choice will be application dependent.
Q: If there is a preexisting model that identifies giraffes and another that identifies horses, how do you know which ones to choose if you want to transfer knowledge to identify cats, for example?
A: You can train a model on one set of images, giraffes and horses in this example, then modify the final layers to include the new categories and retrain on the new data. This is called fine-tuning.
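In Caffe this is typically done in the model's prototxt: the final classification layer is given a new name so its weights are re-initialized rather than loaded from the pretrained .caffemodel. The fragment below is a hypothetical sketch (the layer names "fc7"/"fc8_cats" and the class count are illustrative assumptions, not from the answer above):

```
# Hypothetical fragment of train_val.prototxt for fine-tuning.
# Renaming the old "fc8" layer makes Caffe initialize fresh weights
# for it, while all earlier layers keep their pretrained weights.
layer {
  name: "fc8_cats"        # new name -> weights start from scratch
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_cats"
  inner_product_param {
    num_output: 2         # new number of categories (e.g. cat / not-cat)
  }
}
```

Training would then be launched with the pretrained weights, e.g. `caffe train --solver=solver.prototxt --weights=pretrained.caffemodel`.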
Q: What is the advantage of using batch size >1?
A: Using a larger batch size allows the GPU to train on multiple images at a time, greatly boosting performance.
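One concrete consequence of the batch size is how many solver iterations it takes to pass over the dataset once. This is a rough illustration in plain Python (not a Caffe API; the dataset size is an assumed example):

```python
# Rough illustration: how batch size maps to iterations per epoch
# for a hypothetical dataset of 50,000 images.
import math

def iters_per_epoch(num_images, batch_size):
    """Number of solver iterations needed to see every image once."""
    return math.ceil(num_images / batch_size)

for bs in (1, 32, 256):
    print(f"batch_size={bs:>3} -> {iters_per_epoch(50000, bs)} iterations/epoch")
```

Larger batches mean fewer, bigger iterations, which keeps the GPU's parallel units saturated instead of launching many small kernels.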
Q: Does Caffe support unsupervised deep learning?
A: Not at this time. You need to label all of your input categories.
Q: Any references to improve skills in fine-tuning a network?
A: I used this Caffe example to get started when I was learning to use Caffe: http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html
Q: How do we make sure that our batch size is appropriate relative to the capabilities of our GPU?
A: You can run nvidia-smi to check your GPU utilization. One thing you can do is increase or decrease the batch size to maximize GPU utilization given the amount of memory on the board.
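A quick sanity check before tuning is estimating how much memory the input batch alone needs. The helper below is a back-of-the-envelope sketch (an assumption for illustration, not a Caffe or NVIDIA utility); the real footprint is much larger once activations and weights are counted:

```python
# Back-of-the-envelope estimate: float32 memory for one input batch.
# Activations, gradients, and weights add substantially more on top.
def batch_input_bytes(batch_size, channels, height, width, dtype_bytes=4):
    """Bytes for the raw input blob of one batch."""
    return batch_size * channels * height * width * dtype_bytes

# e.g. a batch of 64 ImageNet-sized images (3 x 227 x 227):
mb = batch_input_bytes(64, 3, 227, 227) / 1e6
print(f"{mb:.1f} MB")  # input blob only
```

Comparing such an estimate against the memory reported by nvidia-smi gives a starting point for how far the batch size can be raised.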
Summary
The above is the full content of "Getting started with caffe questions answers (excerpt)" collected and organized by 生活随笔. We hope the article helps you solve the problems you have encountered.