Machine Learning Experiments (10): Localization from WiFi Fingerprints with Autoencoders and Neural Networks, Part 1 (TensorFlow version)
Declaration: All rights reserved. To repost, please contact the author and credit the source: http://blog.csdn.net/u013719780?viewmode=contents
Autoencoders and Neural Network for Place recognition with WiFi fingerprints
This post is a set of reading notes on the paper by Michał Nowicki and Jan Wietrzykowski.
Original paper: https://arxiv.org/pdf/1611.02049v1.pdf
Many real-world scenarios need to know a user's location in order to provide services, so automatic user localization has been a hot research topic in recent years. Automatic user localization consists of estimating the user's position (latitude, longitude, and altitude). Outdoor localization is relatively easy thanks to the GPS sensors attached to mobile devices. Indoor localization, however, still faces many difficulties and remains an open problem, mainly because GPS signals are lost in indoor environments.
This experiment uses the UJIIndoorLoc dataset. The dataset details are as follows:
Attribute Information:
Attribute 001 (WAP001): Intensity value for WAP001. Negative integer values from -104 to 0 and +100. Positive value 100 used if WAP001 was not detected.
....
Attribute 520 (WAP520): Intensity value for WAP520. Negative integer values from -104 to 0 and +100. Positive value 100 used if WAP520 was not detected.
Attribute 521 (Longitude): Longitude. Negative real values from -7695.9387549299299000 to -7299.786516730871000
Attribute 522 (Latitude): Latitude. Positive real values from 4864745.7450159714 to 4865017.3646842018.
Attribute 523 (Floor): Altitude in floors inside the building. Integer values from 0 to 4.
Attribute 524 (BuildingID): ID to identify the building. Measures were taken in three different buildings. Categorical integer values from 0 to 2.
Attribute 525 (SpaceID): Internal ID number to identify the Space (office, corridor, classroom) where the capture was taken. Categorical integer values.
Attribute 526 (RelativePosition): Relative position with respect to the Space (1 - Inside, 2 - Outside in Front of the door). Categorical integer values.
Attribute 527 (UserID): User identifier (see below). Categorical integer values.
Attribute 528 (PhoneID): Android device identifier (see below). Categorical integer values.
Attribute 529 (Timestamp): UNIX Time when the capture was taken. Integer value.
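The +100 sentinel deserves care in preprocessing: the notebook later passes the raw WAP columns straight to sklearn's scale(), but a common refinement is to first remap "not detected" to a value below the weakest observed signal so it sorts below "barely heard". A minimal NumPy sketch with hypothetical RSSI values (the real data has 520 WAP columns, not 5):

```python
import numpy as np

# Hypothetical mini-batch of 3 fingerprints over 5 WAPs.
# +100 is the dataset's sentinel for "WAP not detected".
rssi = np.array([
    [-60, -85, 100, -104, 100],
    [100, -70, -50, 100, -90],
    [-55, 100, 100, -80, -100],
], dtype=float)

# Remap the sentinel to a weak signal below the valid range (-110 dBm here,
# an assumed choice) so "not detected" is weaker than any real measurement.
rssi[rssi == 100] = -110

# Column-wise standardization, equivalent in spirit to sklearn's scale().
features = (rssi - rssi.mean(axis=0)) / rssi.std(axis=0)

print(features.mean(axis=0))  # each column now has (near-)zero mean
```

Whether to keep the sentinel, remap it, or mask it out is a modeling choice; the pipeline only requires that all 520 columns end up on a comparable scale.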
UserID Anonymized user Height (cm)
0 USER0000 (Validation User) N/A
1 USER0001 170
2 USER0002 176
3 USER0003 172
4 USER0004 174
5 USER0005 184
6 USER0006 180
7 USER0007 160
8 USER0008 176
9 USER0009 177
10 USER0010 186
11 USER0011 176
12 USER0012 158
13 USER0013 174
14 USER0014 173
15 USER0015 174
16 USER0016 171
17 USER0017 166
18 USER0018 162
PhoneID Android Device Android Ver. UserID
0 Celkon A27 4.0.4(6577) 0
1 GT-I8160 2.3.6 8
2 GT-I8160 4.1.2 0
3 GT-I9100 4.0.4 5
4 GT-I9300 4.1.2 0
5 GT-I9505 4.2.2 0
6 GT-S5360 2.3.6 7
7 GT-S6500 2.3.6 14
8 Galaxy Nexus 4.2.2 10
9 Galaxy Nexus 4.3 0
10 HTC Desire HD 2.3.5 18
11 HTC One 4.1.2 15
12 HTC One 4.2.2 0
13 HTC Wildfire S 2.3.5 0,11
14 LT22i 4.0.4 0,1,9,16
15 LT22i 4.1.2 0
16 LT26i 4.0.4 3
17 M1005D 4.0.4 13
18 MT11i 2.3.4 4
19 Nexus 4 4.2.2 6
20 Nexus 4 4.3 0
21 Nexus S 4.1.2 0
22 Orange Monte Carlo 2.3.5 17
23 Transformer TF101 4.0.3 2
24 bq Curie 4.1.1 12
Dividing UJIndoorLoc training data set into training and validation set
In [3]:
train_val_split = np.random.rand(len(features)) < 0.70
train_x = features[train_val_split]
train_y = labels[train_val_split]
val_x = features[~train_val_split]
val_y = labels[~train_val_split]

Using UJIndoorLoc validation data set as testing set
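Note that the boolean mask drawn from np.random.rand puts each row in the training set independently with probability 0.70, so the split is approximately, not exactly, 70/30. A self-contained illustration with stand-in arrays (the sizes here are hypothetical):

```python
import numpy as np

np.random.seed(0)
n = 10000
features = np.random.rand(n, 4)   # stand-in for the 520 scaled WAP columns
labels = np.random.rand(n, 13)    # stand-in for the one-hot building+floor labels

# Each row lands in the training set independently with probability 0.70.
train_val_split = np.random.rand(n) < 0.70
train_x, train_y = features[train_val_split], labels[train_val_split]
val_x, val_y = features[~train_val_split], labels[~train_val_split]

print(train_x.shape[0] / float(n))  # close to, but rarely exactly, 0.70
```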
In [4]:
test_dataset = pd.read_csv("validationData.csv", header=0)
test_features = scale(np.asarray(test_dataset.ix[:, 0:520]))
test_labels = np.asarray(test_dataset["BUILDINGID"].map(str) + test_dataset["FLOOR"].map(str))
test_labels = np.asarray(pd.get_dummies(test_labels))

/Applications/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.py:420: DataConversionWarning: Data with input dtype int64 was converted to float64 by the scale function.
  warnings.warn(msg, DataConversionWarning)

In [5]:
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.0, shape=shape)
    return tf.Variable(initial)

In [6]:
n_input = 520
n_hidden_1 = 256
n_hidden_2 = 128
n_hidden_3 = 64
n_classes = labels.shape[1]

learning_rate = 0.01
training_epochs = 20
batch_size = 10
total_batches = dataset.shape[0] // batch_size

In [7]:
X = tf.placeholder(tf.float32, shape=[None, n_input])
Y = tf.placeholder(tf.float32, [None, n_classes])

# --------------------- Encoder Variables --------------- #
e_weights_h1 = weight_variable([n_input, n_hidden_1])
e_biases_h1 = bias_variable([n_hidden_1])
e_weights_h2 = weight_variable([n_hidden_1, n_hidden_2])
e_biases_h2 = bias_variable([n_hidden_2])
e_weights_h3 = weight_variable([n_hidden_2, n_hidden_3])
e_biases_h3 = bias_variable([n_hidden_3])

# --------------------- Decoder Variables --------------- #
d_weights_h1 = weight_variable([n_hidden_3, n_hidden_2])
d_biases_h1 = bias_variable([n_hidden_2])
d_weights_h2 = weight_variable([n_hidden_2, n_hidden_1])
d_biases_h2 = bias_variable([n_hidden_1])
d_weights_h3 = weight_variable([n_hidden_1, n_input])
d_biases_h3 = bias_variable([n_input])

# --------------------- DNN Variables ------------------ #
dnn_weights_h1 = weight_variable([n_hidden_3, n_hidden_2])
dnn_biases_h1 = bias_variable([n_hidden_2])
dnn_weights_h2 = weight_variable([n_hidden_2, n_hidden_2])
dnn_biases_h2 = bias_variable([n_hidden_2])
dnn_weights_out = weight_variable([n_hidden_2, n_classes])
dnn_biases_out = bias_variable([n_classes])

In [8]:
def encode(x):
    l1 = tf.nn.tanh(tf.add(tf.matmul(x, e_weights_h1), e_biases_h1))
    l2 = tf.nn.tanh(tf.add(tf.matmul(l1, e_weights_h2), e_biases_h2))
    l3 = tf.nn.tanh(tf.add(tf.matmul(l2, e_weights_h3), e_biases_h3))
    return l3

def decode(x):
    l1 = tf.nn.tanh(tf.add(tf.matmul(x, d_weights_h1), d_biases_h1))
    l2 = tf.nn.tanh(tf.add(tf.matmul(l1, d_weights_h2), d_biases_h2))
    l3 = tf.nn.tanh(tf.add(tf.matmul(l2, d_weights_h3), d_biases_h3))
    return l3

def dnn(x):
    l1 = tf.nn.tanh(tf.add(tf.matmul(x, dnn_weights_h1), dnn_biases_h1))
    l2 = tf.nn.tanh(tf.add(tf.matmul(l1, dnn_weights_h2), dnn_biases_h2))
    out = tf.nn.softmax(tf.add(tf.matmul(l2, dnn_weights_out), dnn_biases_out))
    return out

In [9]:
encoded = encode(X)
decoded = decode(encoded)
y_ = dnn(encoded)

In [10]:
us_cost_function = tf.reduce_mean(tf.pow(X - decoded, 2))
s_cost_function = -tf.reduce_sum(Y * tf.log(y_))
us_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(us_cost_function)
s_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(s_cost_function)

In [11]:
correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

Model structure
The figure is taken from the original paper: https://arxiv.org/pdf/1611.02049v1.pdf
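The supervised cost in In [10] (the summed cross-entropy -sum(Y * log(y_))) and the argmax-based accuracy in In [11] are easy to sanity-check in plain NumPy; the softmax outputs and one-hot targets below are hypothetical:

```python
import numpy as np

# Hypothetical softmax outputs for 4 samples over 3 classes.
y_pred = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.4, 0.3],
    [0.6, 0.3, 0.1],
])
# Matching one-hot targets.
y_true = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
])

# Supervised cost from In [10]: summed cross-entropy -sum(Y * log(y_)).
s_cost = -np.sum(y_true * np.log(y_pred))

# Accuracy from In [11]: compare per-row argmax of prediction and target.
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))

print(s_cost, accuracy)  # accuracy is 0.75 here: the third row is misclassified
```

Because the targets are one-hot, only the log-probability assigned to each true class contributes to the cost, which is why a confident wrong prediction (like the third row) is penalized heavily.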
Summary