My NVIDIA Developer Journey: Caffe Tutorial (3): Simple Logistic Regression with sklearn and Caffe
With the environment from "My NVIDIA Developer Journey: Caffe Tutorial (2): [Jetson TK1] Setting Up a Caffe Environment on Linux" in place, we can begin putting Caffe into practice.
I wonder what the first deep-learning code everyone wrote was. I personally got started with Andrew Ng's DeepLearning course and worked through its assignments; the first thing I implemented by hand was a simple linear regression exercise, and I still remember it vividly, haha.
Let's get started. Although Caffe is designed for deep networks, it can just as well express "shallow" models such as logistic regression for classification. We will do simple logistic regression on synthetic data, which we generate and save to HDF5 so it can be fed to Caffe as vectors. Once that model works, we will add layers to improve accuracy. That is the point of Caffe: define a model, experiment, then deploy.
First we import the packages and resources we need:
```python
import os
import sys
import shutil
import tempfile

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

import h5py
import pandas as pd
import sklearn
import sklearn.datasets
import sklearn.linear_model
import sklearn.metrics
import sklearn.model_selection

# Run from the Caffe root so the relative paths below resolve.
os.chdir('..')
sys.path.insert(0, './python')
import caffe
```

We synthesize a dataset of 10,000 4-vectors for binary classification, with 2 informative features and 2 noise features:
```python
X, y = sklearn.datasets.make_classification(
    n_samples=10000, n_features=4, n_redundant=0, n_informative=2,
    n_clusters_per_class=2, hypercube=False, random_state=0
)
print('data,', X.shape, y.shape)

# Split into train and test.
X, Xt, y, yt = sklearn.model_selection.train_test_split(X, y)
print('train,', X.shape, y.shape)
print('test,', Xt.shape, yt.shape)

# Visualize a sample of the data.
ind = np.random.permutation(X.shape[0])[:1000]
df = pd.DataFrame(X[ind])
_ = pd.plotting.scatter_matrix(df, figsize=(9, 9), diagonal='kde',
                               marker='o', s=40, alpha=.4, c=y[ind])
```

```
data, (10000, 4) (10000,)
train, (7500, 4) (7500,)
test, (2500, 4) (2500,)
```
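As an aside, `train_test_split` defaults to holding out 25% of the samples as the test set, which is where the 7500/2500 split above comes from. A minimal stdlib-only sketch of that shuffle-and-split idea (the `shuffle_split` helper is hypothetical, not part of scikit-learn):

```python
import random

def shuffle_split(rows, labels, test_frac=0.25, seed=0):
    # Shuffle indices, then carve off the first test_frac as the test set.
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(rows) * test_frac)
    train_idx, test_idx = idx[n_test:], idx[:n_test]
    pick = lambda seq, ids: [seq[i] for i in ids]
    return (pick(rows, train_idx), pick(rows, test_idx),
            pick(labels, train_idx), pick(labels, test_idx))

X_demo = [[float(i)] for i in range(8)]
y_demo = [i % 2 for i in range(8)]
Xtr, Xte, ytr, yte = shuffle_split(X_demo, y_demo)
print(len(Xtr), len(Xte))  # 6 2
```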
Train and evaluate scikit-learn's logistic regression with stochastic gradient descent (SGD), timing it and checking the classifier's accuracy:
```python
%%timeit
# Train and test the scikit-learn SGD logistic regression.
# (`n_iter` was renamed `max_iter` in newer scikit-learn releases.)
clf = sklearn.linear_model.SGDClassifier(
    loss='log', n_iter=1000, penalty='l2', alpha=5e-4,
    class_weight='balanced')
clf.fit(X, y)
yt_pred = clf.predict(Xt)
print('Accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(yt, yt_pred)))
```

```
Accuracy: 0.781
Accuracy: 0.781
Accuracy: 0.781
Accuracy: 0.781
1 loop, best of 3: 372 ms per loop
```
Next, save the dataset to HDF5 for loading into Caffe:
```python
# Write out the data to HDF5 files in a temp directory.
# This file is assumed to be caffe_root/examples/hdf5_classification.ipynb
dirname = os.path.abspath('./examples/hdf5_classification/data')
if not os.path.exists(dirname):
    os.makedirs(dirname)

train_filename = os.path.join(dirname, 'train.h5')
test_filename = os.path.join(dirname, 'test.h5')

# HDF5DataLayer source should be a file containing a list of HDF5 filenames.
# To show this off, we'll list the same data file twice.
with h5py.File(train_filename, 'w') as f:
    f['data'] = X
    f['label'] = y.astype(np.float32)
with open(os.path.join(dirname, 'train.txt'), 'w') as f:
    f.write(train_filename + '\n')
    f.write(train_filename + '\n')

# HDF5 is pretty efficient, but can be further compressed.
comp_kwargs = {'compression': 'gzip', 'compression_opts': 1}
with h5py.File(test_filename, 'w') as f:
    f.create_dataset('data', data=Xt, **comp_kwargs)
    f.create_dataset('label', data=yt.astype(np.float32), **comp_kwargs)
with open(os.path.join(dirname, 'test.txt'), 'w') as f:
    f.write(test_filename + '\n')
```

We can define logistic regression in Caffe through the Python net specification. This is a quick and natural way to define a network, avoiding manual editing of the protobuf model:
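Note that the `source` handed to the HDF5 data layer is not an HDF5 file itself but a plain text file listing one HDF5 path per line. A minimal stdlib-only sketch of writing and reading back such a list file (the `.h5` path here is hypothetical and never actually created):

```python
import os
import tempfile

# Write a list file naming the same (hypothetical) HDF5 file twice,
# mirroring the train.txt written above.
d = tempfile.mkdtemp()
list_path = os.path.join(d, 'train.txt')
h5_path = os.path.join(d, 'train.h5')  # hypothetical; not created here
with open(list_path, 'w') as f:
    f.write(h5_path + '\n')
    f.write(h5_path + '\n')

# The data layer reads this back as one filename per line.
with open(list_path) as f:
    sources = [line.strip() for line in f if line.strip()]
print(len(sources), sources[0] == sources[1])  # 2 True
```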
```python
from caffe import layers as L
from caffe import params as P

def logreg(hdf5, batch_size):
    # logistic regression: data, matrix multiplication, and 2-class softmax loss
    n = caffe.NetSpec()
    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
    n.ip1 = L.InnerProduct(n.data, num_output=2,
                           weight_filler=dict(type='xavier'))
    n.accuracy = L.Accuracy(n.ip1, n.label)
    n.loss = L.SoftmaxWithLoss(n.ip1, n.label)
    return n.to_proto()

train_net_path = 'examples/hdf5_classification/logreg_auto_train.prototxt'
with open(train_net_path, 'w') as f:
    f.write(str(logreg('examples/hdf5_classification/data/train.txt', 10)))

test_net_path = 'examples/hdf5_classification/logreg_auto_test.prototxt'
with open(test_net_path, 'w') as f:
    f.write(str(logreg('examples/hdf5_classification/data/test.txt', 10)))
```

Now we define the "solver", which trains the network by pointing at the train and test nets defined above and setting values for the various parameters for learning, display, and "snapshotting":
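For reference, the data layer that `logreg` emits looks roughly like the fragment below in the generated prototxt (a sketch for orientation; exact field order and defaults may differ):

```
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "examples/hdf5_classification/data/train.txt"
    batch_size: 10
  }
}
```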
```python
from caffe.proto import caffe_pb2

def solver(train_net_path, test_net_path):
    s = caffe_pb2.SolverParameter()

    # Specify locations of the train and test networks.
    s.train_net = train_net_path
    s.test_net.append(test_net_path)

    s.test_interval = 1000  # Test after every 1000 training iterations.
    s.test_iter.append(250) # Test 250 "batches" each time we test.

    s.max_iter = 10000      # Number of times to update the net (training iterations).

    # Set the initial learning rate for stochastic gradient descent (SGD).
    s.base_lr = 0.01

    # Set `lr_policy` to define how the learning rate changes during training.
    # Here, we 'step' the learning rate by multiplying it by a factor `gamma`
    # every `stepsize` iterations.
    s.lr_policy = 'step'
    s.gamma = 0.1
    s.stepsize = 5000

    # Set other optimization parameters. Setting a non-zero `momentum` takes a
    # weighted average of the current gradient and previous gradients to make
    # learning more stable. L2 weight decay regularizes learning, to help
    # prevent the model from overfitting.
    s.momentum = 0.9
    s.weight_decay = 5e-4

    # Display the current training loss and accuracy every 1000 iterations.
    s.display = 1000

    # Snapshots are files used to store networks we've trained. Here, we'll
    # snapshot every 10K iterations -- just once at the end of training.
    # For larger networks that take longer to train, you may want to set
    # snapshot < max_iter to save the network and training state to disk
    # during optimization, preventing disaster in case of machine crashes, etc.
    s.snapshot = 10000
    s.snapshot_prefix = 'examples/hdf5_classification/data/train'

    # We'll train on the CPU for fair benchmarking against scikit-learn.
    # Changing to GPU should result in much faster training!
    s.solver_mode = caffe_pb2.SolverParameter.CPU

    return s

solver_path = 'examples/hdf5_classification/logreg_solver.prototxt'
with open(solver_path, 'w') as f:
    f.write(str(solver(train_net_path, test_net_path)))
```

Time to train and evaluate the logistic regression in Python and look at its loss and fit:
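The 'step' policy above means the effective learning rate is `base_lr * gamma^(floor(iter / stepsize))`, i.e. it drops by a factor of 10 every 5000 iterations here. A quick sketch of that schedule (the `step_lr` helper is illustrative, not a Caffe API):

```python
def step_lr(base_lr, gamma, stepsize, it):
    # Caffe's 'step' policy: multiply by gamma every stepsize iterations.
    return base_lr * (gamma ** (it // stepsize))

# With base_lr=0.01, gamma=0.1, stepsize=5000 as in the solver above:
print(step_lr(0.01, 0.1, 5000, 0))     # 0.01
print(step_lr(0.01, 0.1, 5000, 5000))  # ~0.001
print(step_lr(0.01, 0.1, 5000, 9999))  # ~0.001
```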
```python
%%timeit
caffe.set_mode_cpu()
solver = caffe.get_solver(solver_path)
solver.solve()

accuracy = 0
batch_size = solver.test_nets[0].blobs['data'].num
test_iters = int(len(Xt) / batch_size)
for i in range(test_iters):
    solver.test_nets[0].forward()
    accuracy += solver.test_nets[0].blobs['accuracy'].data
accuracy /= test_iters

print("Accuracy: {:.3f}".format(accuracy))
```

```
Accuracy: 0.770
Accuracy: 0.770
Accuracy: 0.770
Accuracy: 0.770
1 loop, best of 3: 195 ms per loop
```
Do the same through the command-line interface for detailed output on the model and solving:
```python
!./build/tools/caffe train -solver examples/hdf5_classification/logreg_solver.prototxt
```

```
I0224 00:32:03.232779   655 caffe.cpp:178] Use CPU.
I0224 00:32:03.391911   655 solver.cpp:48] Initializing solver from parameters:
train_net: "examples/hdf5_classification/logreg_auto_train.prototxt"
test_net: "examples/hdf5_classification/logreg_auto_test.prototxt"
......
I0224 00:32:04.087514   655 solver.cpp:406]     Test net output #0: accuracy = 0.77
I0224 00:32:04.087532   655 solver.cpp:406]     Test net output #1: loss = 0.593815 (* 1 = 0.593815 loss)
I0224 00:32:04.087541   655 solver.cpp:323] Optimization Done.
I0224 00:32:04.087548   655 caffe.cpp:222] Optimization Done.
```

If you look at the output, or at logreg_auto_train.prototxt, you will see that the model is simple logistic regression.
We can make it a bit more advanced by introducing a nonlinearity between the weights that take the input and the weights that give the output: now we have a two-layer network.
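Conceptually, the nonlinear model computes `ip2(relu(ip1(x)))` instead of a single matrix multiply. A pure-Python sketch of that forward pass, with toy weights rather than anything trained:

```python
def matvec(W, b, x):
    # Inner product layer: W @ x + b.
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    # Rectified linear nonlinearity, applied elementwise.
    return [max(0.0, u) for u in v]

# Toy two-layer net: 2 inputs -> 3 hidden units (ReLU) -> 2 class scores.
W1 = [[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]]; b1 = [0.0, 0.0, 0.0]
W2 = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]];     b2 = [0.0, 0.0]

x = [2.0, 1.0]
hidden = relu(matvec(W1, b1, x))   # [1.0, 1.5, 0.0]
scores = matvec(W2, b2, hidden)    # [2.5, 1.5]
print(hidden, scores)
```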
That network is given in nonlinear_auto_train.prototxt, and it is the only change made in the solver. The final accuracy of the new network we will now use should be higher than that of logistic regression!
```python
from caffe import layers as L
from caffe import params as P

def nonlinear_net(hdf5, batch_size):
    # one small nonlinearity, one leap for model kind
    n = caffe.NetSpec()
    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
    # define a hidden layer of dimension 40
    n.ip1 = L.InnerProduct(n.data, num_output=40,
                           weight_filler=dict(type='xavier'))
    # transform the output through the ReLU (rectified linear) non-linearity
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    # score the (now non-linear) features
    n.ip2 = L.InnerProduct(n.ip1, num_output=2,
                           weight_filler=dict(type='xavier'))
    # same accuracy and loss as before
    n.accuracy = L.Accuracy(n.ip2, n.label)
    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
    return n.to_proto()

train_net_path = 'examples/hdf5_classification/nonlinear_auto_train.prototxt'
with open(train_net_path, 'w') as f:
    f.write(str(nonlinear_net('examples/hdf5_classification/data/train.txt', 10)))

test_net_path = 'examples/hdf5_classification/nonlinear_auto_test.prototxt'
with open(test_net_path, 'w') as f:
    f.write(str(nonlinear_net('examples/hdf5_classification/data/test.txt', 10)))

solver_path = 'examples/hdf5_classification/nonlinear_logreg_solver.prototxt'
with open(solver_path, 'w') as f:
    f.write(str(solver(train_net_path, test_net_path)))
```

```python
%%timeit
caffe.set_mode_cpu()
solver = caffe.get_solver(solver_path)
solver.solve()

accuracy = 0
batch_size = solver.test_nets[0].blobs['data'].num
test_iters = int(len(Xt) / batch_size)
for i in range(test_iters):
    solver.test_nets[0].forward()
    accuracy += solver.test_nets[0].blobs['accuracy'].data
accuracy /= test_iters

print("Accuracy: {:.3f}".format(accuracy))
```

```
Accuracy: 0.838
Accuracy: 0.837
Accuracy: 0.838
Accuracy: 0.834
1 loop, best of 3: 277 ms per loop
```
Again, do the same through the command-line interface for detailed output on the model and solving:
```python
!./build/tools/caffe train -solver examples/hdf5_classification/nonlinear_logreg_solver.prototxt
```

```
I0224 00:32:05.654265   658 caffe.cpp:178] Use CPU.
I0224 00:32:05.810444   658 solver.cpp:48] Initializing solver from parameters:
train_net: "examples/hdf5_classification/nonlinear_auto_train.prototxt"
test_net: "examples/hdf5_classification/nonlinear_auto_test.prototxt"
......
I0224 00:32:06.078208   658 solver.cpp:406]     Test net output #0: accuracy = 0.8388
I0224 00:32:06.078225   658 solver.cpp:406]     Test net output #1: loss = 0.382042 (* 1 = 0.382042 loss)
I0224 00:32:06.078234   658 solver.cpp:323] Optimization Done.
I0224 00:32:06.078241   658 caffe.cpp:222] Optimization Done.
```

```python
# Clean up (comment this out if you want to examine the hdf5_classification/data directory).
shutil.rmtree(dirname)
```

Reference: BVLC/caffe: Caffe: a fast open framework for deep learning. (github.com)