Notes on cs231n assignment3
Q1: Image Captioning with Vanilla RNNs (25 points)
First up is rnn_step_forward, which is just the formula written out directly:
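The step formula being implemented is the standard vanilla RNN update:

$h_t = \tanh(x_t W_x + h_{t-1} W_h + b)$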
next_h = np.tanh(x.dot(Wx) + prev_h.dot(Wh) + b)  # [N, H]
cache = (x, prev_h, Wx, Wh, b, next_h)

Next, rnn_step_backward: from the derivative of tanh, $\tanh'(z) = 1 - \tanh^2(z)$, we get:
x, prev_h, Wx, Wh, b, next_h = cache
dtanh = dnext_h * (1 - next_h * next_h)  # [N, H]
db = np.sum(dtanh, axis=0)               # [H,]
dWh = (prev_h.T).dot(dtanh)              # [H, H]
dWx = (x.T).dot(dtanh)                   # [D, H]
dprev_h = dtanh.dot(Wh.T)                # [N, H]
dx = dtanh.dot(Wx.T)                     # [N, D]

rnn_forward calls rnn_step_forward once per time step; its unrolled execution over time is illustrated in a figure in the lecture slides. The code:
N, T, D = x.shape
H = h0.shape[1]
h = np.zeros((N, T, H))
prev_h = h0
for i in range(T):
    next_h, _ = rnn_step_forward(x[:, i, :], prev_h, Wx, Wh, b)
    prev_h = next_h
    h[:, i, :] = prev_h
cache = (x, h0, Wh, Wx, b, h)

In rnn_backward, note that dh has shape (N, T, H): it collects the upstream gradient flowing into every hidden-state output of the unrolled sequence. After the torment of the previous assignments, this is now easy to write:
x, h0, Wh, Wx, b, h = cache
N, T, D = x.shape
dprev_h = np.zeros_like(h0)
dx = np.zeros_like(x)
dWx = np.zeros_like(Wx)
dWh = np.zeros_like(Wh)
db = np.zeros_like(b)
for i in range(T):
    if i == T - 1:
        prev_h = h0
    else:
        prev_h = h[:, T-i-2, :]
    next_h = h[:, T-i-1, :]
    cache2 = (x[:, T-i-1, :], prev_h, Wx, Wh, b, next_h)
    dnext_h = dh[:, T-i-1, :] + dprev_h
    dx1, dprev_h, dWx1, dWh1, db1 = rnn_step_backward(dnext_h, cache2)
    dx[:, T-i-1, :] = dx1
    dWx += dWx1
    dWh += dWh1
    db += db1
dh0 = dprev_h

Next is word_embedding_forward. As the hint says, numpy integer-array indexing does the job:
out = W[x, :]
cache = (x, W)

Then word_embedding_backward. The hint says to use np.add.at: from the forward pass above, every occurrence of a word index in x must add its upstream gradient into the corresponding row of dW, and repeated indices have to accumulate.
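A quick toy check of why np.add.at is needed rather than plain fancy-indexed assignment (the numbers here are made up purely for illustration):

import numpy as np

W = np.zeros((3, 2))            # toy "embedding" matrix: 3 words, 2 dims
x = np.array([0, 2, 0])         # word index 0 appears twice
dout = np.ones((3, 2))          # pretend upstream gradient

W[x] += dout
print(W[0])                     # [1. 1.] -- the duplicate index was NOT accumulated

W = np.zeros((3, 2))
np.add.at(W, x, dout)           # unbuffered add: duplicates accumulate correctly
print(W[0])                     # [2. 2.]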
So the backward pass is simply:

x, W = cache
dW = np.zeros_like(W)
np.add.at(dW, x, dout)

Next comes the trickier part of this question: the RNN for image captioning. First, let's lay out the idea.
The input is the image features, i.e. the vectors extracted from the fc7 layer of VGG-16. After an affine projection, these features become the RNN's initial hidden state h0. Then take the inputs and targets from the ground-truth captions: by the structure of the RNN, the input is every word of the caption except the last, and the target is every word except the first (see the small snippet below). captions_in is then passed through the word embedding; note that in this experiment the word vectors are learned, i.e. W_embed is trained along with everything else. Then rnn_forward and an affine layer produce the scores, the loss is computed, and finally the gradients are computed by backpropagation.
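For context, the slicing described above is already done by the provided skeleton of the captioning class, roughly like this (treat it as a sketch of the given code rather than an exact quote):

# input: every word except the last; target: every word except the first
captions_in = captions[:, :-1]
captions_out = captions[:, 1:]

The full forward and backward pass then looks like this: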
N, D = features.shape
# (1) project image features to the initial hidden state
out, cache_affine = temporal_affine_forward(features.reshape((N, 1, D)), W_proj, b_proj)
h0 = out.reshape((N, -1))
# (2) embed the input caption words
out_word, cache_word = word_embedding_forward(captions_in, W_embed)
# (3) run the recurrent layer over all time steps
if self.cell_type == 'rnn':
    h_out, cache_out = rnn_forward(out_word, h0, Wx, Wh, b)
elif self.cell_type == 'lstm':
    h_out, cache_out = lstm_forward(out_word, h0, Wx, Wh, b)
else:
    raise ValueError('Invalid cell_type "%s" while running loss function' % self.cell_type)
# (4) temporal affine layer: hidden states -> vocabulary scores
score, cache_score = temporal_affine_forward(h_out, W_vocab, b_vocab)
# (5) temporal softmax loss, ignoring NULL positions via the mask
mask = (captions_out != self._null)
loss, dscore = temporal_softmax_loss(score, captions_out, mask)

# backward
dh_out, dW_vocab, db_vocab = temporal_affine_backward(dscore, cache_score)
grads['W_vocab'] = dW_vocab
grads['b_vocab'] = db_vocab

if self.cell_type == 'rnn':
    dout_word, dh0, dWx, dWh, db = rnn_backward(dh_out, cache_out)
elif self.cell_type == 'lstm':
    dout_word, dh0, dWx, dWh, db = lstm_backward(dh_out, cache_out)
else:
    raise ValueError('Invalid cell_type "%s" while running loss function backward' % self.cell_type)
grads['Wx'] = dWx
grads['Wh'] = dWh
grads['b'] = db

dW_embed = word_embedding_backward(dout_word, cache_word)
grads['W_embed'] = dW_embed

dfeatures, dW_proj, db_proj = temporal_affine_backward(dh0.reshape((N, 1, -1)), cache_affine)
grads['W_proj'] = dW_proj
grads['b_proj'] = db_proj

That is all for Q1.
Q2: Image Captioning with LSTMs (30 points)
A very similar process; implement it according to the formulas:
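For reference, the LSTM step formulas (standard LSTM, with the pre-activation vector split into four gates of size H) are:

$a = x_t W_x + h_{t-1} W_h + b \in \mathbb{R}^{N \times 4H}$, split into $a_i, a_f, a_o, a_g$
$i = \sigma(a_i), \quad f = \sigma(a_f), \quad o = \sigma(a_o), \quad g = \tanh(a_g)$
$c_t = f \odot c_{t-1} + i \odot g$
$h_t = o \odot \tanh(c_t)$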
lstm_step_forward:
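A minimal sketch of lstm_step_forward, written so that its cache matches the tuple unpacked in lstm_step_backward below (a reconstruction under that assumption, not necessarily the original code):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b):
    N, H = prev_h.shape
    a = x.dot(Wx) + prev_h.dot(Wh) + b      # [N, 4H] pre-activations
    i = sigmoid(a[:, 0*H:1*H])              # input gate
    f = sigmoid(a[:, 1*H:2*H])              # forget gate
    o = sigmoid(a[:, 2*H:3*H])              # output gate
    g = np.tanh(a[:, 3*H:4*H])              # candidate cell values
    next_c = f * prev_c + i * g
    next_h = o * np.tanh(next_c)
    # cache ordered to match the unpacking in lstm_step_backward below
    cache = (x, Wx, prev_h, Wh, H, a, prev_c, i, f, o, g, next_c)
    return next_h, next_c, cache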
lstm_step_backward:
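The chain-rule relations the code below implements (standard LSTM backprop; the key point is that the upstream $dc_t$ has to be added to the part of $dh_t$ that flows back through $\tanh(c_t)$):

$do = dh_t \odot \tanh(c_t)$
$dc_t \mathrel{+}= dh_t \odot o \odot \bigl(1 - \tanh^2(c_t)\bigr)$
$df = dc_t \odot c_{t-1}, \quad dc_{t-1} = dc_t \odot f, \quad di = dc_t \odot g, \quad dg = dc_t \odot i$
$da_i = di \odot i(1-i), \quad da_f = df \odot f(1-f), \quad da_o = do \odot o(1-o), \quad da_g = dg \odot (1-g^2)$

The four gate gradients are then concatenated into $da$ and pushed through the affine part exactly as in the vanilla RNN step: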
x, Wx, prev_h, Wh, H, a, prev_c, i, f, o, g, next_c = cache
do = dnext_h * np.tanh(next_c)
dnext_c = dnext_h * o * (1 - np.tanh(next_c) * np.tanh(next_c)) + dnext_c
df = dnext_c * prev_c
dprev_c = dnext_c * f
di = dnext_c * g
dg = dnext_c * i
dai = di * i * (1 - i)
daf = df * f * (1 - f)
dao = do * o * (1 - o)
dag = dg * (1 - g * g)
da = np.concatenate((dai, daf, dao, dag), axis=1)
db = np.sum(da, axis=0)
dx = da.dot(Wx.T)
dWx = x.T.dot(da)
dprev_h = da.dot(Wh.T)
dWh = prev_h.T.dot(da)

lstm_forward:
N, T, D = x.shape
_, H = h0.shape
prev_h = h0
prev_c = np.zeros_like(prev_h)
h = np.zeros((N, T, H))
cache = []
for i in range(T):
    next_h, next_c, cache1 = lstm_step_forward(x[:, i, :], prev_h, prev_c, Wx, Wh, b)
    h[:, i, :] = next_h
    prev_h = next_h
    prev_c = next_c
    cache.append(cache1)

lstm_backward:
N, T, H = dh.shape
x = cache[T-1][0]
_, D = x.shape
dx = np.zeros((N, T, D))
dWx = np.zeros((D, 4 * H))
dWh = np.zeros((H, 4 * H))
db = np.zeros(4 * H)
dnext_h = np.zeros((N, H))
dnext_c = np.zeros((N, H))
for i in range(T):
    dnext_h = dnext_h + dh[:, T-i-1, :]
    dx1, dprev_h, dprev_c, dWx1, dWh1, db1 = lstm_step_backward(dnext_h, dnext_c, cache[T-i-1])
    dx[:, T-i-1, :] = dx1
    dWx = dWx + dWx1
    dWh = dWh + dWh1
    db = db + db1
    dnext_c = dprev_c
    dnext_h = dprev_h
dh0 = dnext_h

sample:
N, D = features.shape
out_affine, cache_affine = temporal_affine_forward(features.reshape((N, 1, D)), W_proj, b_proj)
h0 = out_affine.reshape((N, -1))
captions[:, 0] = self._start
prev_h = h0
prev_c = np.zeros_like(prev_h)
word_index = captions[:, 0]
word_embed = W_embed[word_index]
for i in range(1, max_length):
    if self.cell_type == 'rnn':
        next_h, cache = rnn_step_forward(word_embed, prev_h, Wx, Wh, b)
    elif self.cell_type == 'lstm':
        next_h, next_c, cache = lstm_step_forward(word_embed, prev_h, prev_c, Wx, Wh, b)
        prev_c = next_c
    else:
        raise ValueError('Invalid cell_type "%s" while running sample function' % self.cell_type)
    # greedy decoding: pick the highest-scoring word and feed it back in at the next step
    out_vocab, cache_vocab = affine_forward(next_h, W_vocab, b_vocab)
    captions[:, i] = np.argmax(out_vocab, axis=1)
    word_index = captions[:, i]
    word_embed = W_embed[word_index]
    prev_h = next_h

Q3: Network Visualization: Saliency maps, Class Visualization, and Fooling Images (15 points)
When loading squeezenet.ckpt you may get an error; just delete the if check. This is most likely because a TensorFlow checkpoint is actually stored as several files (squeezenet.ckpt.index, squeezenet.ckpt.data-..., and so on), so an os.path.exists check on the bare .ckpt path fails even though the weights are there.
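Presumably the if statement in question is the download guard in the notebook, which looks roughly like this (the exact path and wording are assumptions; your notebook version may differ):

import os

SAVE_PATH = 'cs231n/datasets/squeezenet.ckpt'   # path assumed from the notebook
if not os.path.exists(SAVE_PATH):               # fails because the ckpt is split into .index/.data files
    raise ValueError("You need to download SqueezeNet!")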
For Saliency Maps, first compute the gradient of the correct-class scores with respect to the input images, then turn it into the saliency map; just follow the notebook instructions:
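A minimal sketch of what this block likely looks like, reusing the same model.image / model.labels / model.classifier pattern as the fooling-image code below (here X and y are assumed to be the numpy images and labels passed into compute_saliency_maps):

# correct-class score for each image
correct_scores = tf.gather_nd(model.classifier,
                              tf.stack((tf.range(X.shape[0]), model.labels), axis=1))
# gradient of those scores w.r.t. the input pixels
grad_t = tf.gradients(correct_scores, model.image)[0]
grad_val = sess.run(grad_t, feed_dict={model.image: X, model.labels: y})
# saliency map: max over channels of the absolute gradient
saliency = np.max(np.abs(grad_val), axis=3)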
Fooling Images:
for i in range(100):
    target_score = tf.gather_nd(model.classifier,
                                tf.stack((tf.range(X.shape[0]), model.labels), axis=1))
    pred_label = tf.argmax(model.classifier, axis=1)
    # gradient of the target-class score w.r.t. the image ([0] pulls the tensor out of the list)
    dX_fool_grad = tf.gradients(target_score, model.image)[0]
    dX_t = learning_rate * dX_fool_grad / tf.norm(dX_fool_grad)
    dX, pred_label2 = sess.run([dX_t, pred_label],
                               feed_dict={model.image: X_fooling, model.labels: [target_y]})
    print("pred_label:", pred_label2)
    if pred_label2[0] == target_y:
        print("finish step:", i)
        return X_fooling
    else:
        print("step", i)
        X_fooling = X_fooling + dX[0]

Class visualization:
Outside the loop. The big trap here is l2_reg and learning_rate: if the original Python variables are used directly in the TensorFlow computation, things go wrong and the first dimension of dx_t ends up becoming 25..., so both are wrapped in tf.constant first:
scores = tf.gather_nd(model.classifier,
                      tf.stack((tf.range(X.shape[0]), model.labels), axis=1))
l2_reg = tf.constant(l2_reg)
lr = tf.constant(learning_rate, dtype=tf.float32)
l2_norm = tf.norm(model.image)
loss = scores - l2_reg * l2_norm * l2_norm
grad = tf.gradients(loss, model.image)[0]
dx_t = lr * grad / tf.norm(grad)

Inside the loop:
dx = sess.run(dx_t, feed_dict={model.image: X, model.labels: [target_y]})
X = X + dx[0]

The rather weird generated image...
Q4: Style Transfer (15 points)
To be continued.
Q5: Generative Adversarial Networks (15 points)
To be continued.
Summary