
CV / FRec / ME-LF: Commonly Used Model Evaluation Metrics and Loss Functions in Face Recognition (Triplet Loss, Center Loss) - Introduction and Detailed Usage Guide

Contents

T1. Triplet Loss

1. Excerpt from the original paper

2. Code implementation

T2. Center Loss

1. Excerpt from the original paper

2. Code implementation


T1. Triplet Loss

FaceNet: A Unified Embedding for Face Recognition and Clustering
https://arxiv.org/pdf/1503.03832.pdf
http://www.goodtimesweb.org/surveillance/2015/1503.03832v1.pdf

1. Excerpt from the original paper

Triplet Loss. The embedding is represented by $f(x) \in \mathbb{R}^d$. It embeds an image $x$ into a $d$-dimensional Euclidean space. Additionally, we constrain this embedding to live on the $d$-dimensional hypersphere, i.e. $\|f(x)\|_2 = 1$. This loss is motivated in [19] in the context of nearest-neighbor classification. Here we want to ensure that an image $x_i^a$ (anchor) of a specific person is closer to all other images $x_i^p$ (positive) of the same person than it is to any image $x_i^n$ (negative) of any other person. This is visualized in Figure 3. Thus we want

$$\|x_i^a - x_i^p\|_2^2 + \alpha < \|x_i^a - x_i^n\|_2^2, \quad \forall\, (x_i^a, x_i^p, x_i^n) \in \mathcal{T}. \tag{1}$$

Here $\alpha$ is a margin that is enforced between positive and negative pairs. $\mathcal{T}$ is the set of all possible triplets in the training set and has cardinality $N$. The loss that is being minimized is then

$$L = \sum_{i}^{N} \left[\, \|f(x_i^a) - f(x_i^p)\|_2^2 - \|f(x_i^a) - f(x_i^n)\|_2^2 + \alpha \,\right]_+ . \tag{2}$$

Generating all possible triplets would result in many triplets that are easily satisfied (i.e. fulfill the constraint in Eq. (1)). These triplets would not contribute to the training and would result in slower convergence, as they would still be passed through the network. It is crucial to select hard triplets that are active and can therefore contribute to improving the model. The following section talks about the different approaches used for triplet selection.

2. Code implementation

import tensorflow as tf  # TensorFlow 1.x API

def triplet_loss(anchor, positive, negative, alpha):
    """Calculate the triplet loss according to the FaceNet paper.

    anchor, positive, negative: features of the sampled faces and of the anchor's
    positive/negative samples; each has shape (batch_size, feature_size), where
    feature_size is the dimensionality of the face features learned by the network.
    """
    with tf.variable_scope('triplet_loss'):
        # pos_dist: squared distance between each anchor and its positive sample
        pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
        # neg_dist: squared distance between each anchor and its negative sample
        neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)
        # basic_loss = pos_dist - neg_dist + alpha; only the part above zero contributes
        basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
        loss = tf.reduce_mean(tf.maximum(basic_loss, 0.0), 0)
    return loss
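A hypothetical usage sketch (the `embeddings` tensor, `feature_size`, and the optimizer settings are assumptions, not part of the snippet above): if the network output is laid out as consecutive anchor/positive/negative rows, the three groups can be unstacked and fed to triplet_loss like this.

# embeddings: hypothetical network output of shape (3 * batch_size, feature_size),
# laid out as consecutive [anchor, positive, negative] rows
anchor, positive, negative = tf.unstack(tf.reshape(embeddings, [-1, 3, feature_size]), 3, 1)
loss = triplet_loss(anchor, positive, negative, alpha=0.2)
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)  # TensorFlow 1.x optimizer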

T2. Center Loss

A Discriminative Feature Learning Approach for Deep Face Recognition

http://ydwen.github.io/papers/WenECCV16.pdf

1. Excerpt from the original paper

The Center Loss. So, how to develop an effective loss function to improve the discriminative power of the deeply learned features? Intuitively, minimizing the intra-class variations while keeping the features of different classes separable is the key. To this end, we propose the center loss function, as formulated in Eq. 2:

$$L_C = \frac{1}{2} \sum_{i=1}^{m} \|x_i - c_{y_i}\|_2^2 \tag{2}$$

The $c_{y_i} \in \mathbb{R}^d$ denotes the $y_i$-th class center of the deep features. The formulation effectively characterizes the intra-class variations. Ideally, the $c_{y_i}$ should be updated as the deep features change. In other words, we would need to take the entire training set into account and average the features of every class in each iteration, which is inefficient, even impractical. Therefore, the center loss cannot be used directly. This is possibly the reason that such a center loss has never been used in CNNs until now.

To address this problem, we make two necessary modifications. First, instead of updating the centers with respect to the entire training set, we perform the update based on the mini-batch. In each iteration, the centers are computed by averaging the features of the corresponding classes (in this case, some of the centers may not update). Second, to avoid large perturbations caused by a few mislabelled samples, we use a scalar $\alpha$ to control the learning rate of the centers.
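To make the mini-batch update concrete, here is a plain NumPy sketch (an illustrative assumption, not the paper's exact update rule) in which each class center that appears in the batch is pulled toward the mean feature of that class, with the scalar alpha controlling the step size.

import numpy as np

def update_centers(centers, features, labels, alpha=0.5):
    # centers: (num_classes, feature_size); features: (batch_size, feature_size)
    for j in np.unique(labels):
        batch_feats = features[labels == j]              # features of class j in this mini-batch
        delta = centers[j] - batch_feats.mean(axis=0)    # gap between the current center and the batch mean
        centers[j] -= alpha * delta                      # centers of classes absent from the batch stay unchanged
    return centers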

2. Code implementation

import tensorflow as tf  # TensorFlow 1.x API

def center_loss(features, label, alfa, nrof_classes):
    # features: sample features with shape (batch_size, feature_size)
    nrof_features = features.get_shape()[1]  # feature_size, i.e. the dimensionality of the face features
    # centers is a non-trainable variable holding one center per class
    centers = tf.get_variable('centers', [nrof_classes, nrof_features], dtype=tf.float32,
                              initializer=tf.constant_initializer(0), trainable=False)
    label = tf.reshape(label, [-1])
    # gather the class center of every sample in the batch according to label;
    # centers_batch has the same shape as features, (batch_size, feature_size)
    centers_batch = tf.gather(centers, label)
    # diff is the gap between each class center and the sample features; it is used to
    # move the centers, and the hyperparameter alfa controls how far the centers move
    diff = (1 - alfa) * (centers_batch - features)
    centers = tf.scatter_sub(centers, label, diff)  # update the centers with diff
    loss = tf.reduce_mean(tf.square(features - centers_batch))  # compute the loss
    return loss, centers  # return the loss and the updated centers
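A hypothetical training sketch (the names `prelogits`, `logits`, `labels`, `nrof_classes` and the weights 0.95 / 1e-2 are assumptions, not part of the snippet above): in the paper the center loss is not used alone but is added, with a small weight, to an ordinary softmax cross-entropy loss.

softmax_term = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
center_term, centers_update = center_loss(prelogits, labels, alfa=0.95, nrof_classes=nrof_classes)
# force the scatter_sub center update returned above to run on every training step
with tf.control_dependencies([centers_update]):
    total_loss = softmax_term + 1e-2 * center_term  # small weight on the center loss term
    train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(total_loss)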
