

Adversarial Attacks on the Human Visual System

Published: 2023/12/15


Neural networks are exceptionally good at recognizing objects in an image, and in many cases they have shown superhuman levels of accuracy (e.g., traffic sign recognition).


But they are also known to have an interesting property: we can introduce small changes to the input photo and have the neural network wrongly classify it as something completely different. Such attacks are known as adversarial attacks on a neural network. One important variant is the Fast Gradient Sign Method by Ian Goodfellow et al., introduced in the paper Explaining and Harnessing Adversarial Examples. Properly implemented, such methods add noise that is barely perceptible to the human eye yet fools the neural network classifier. One classic example is shown below.

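The core of FGSM is a single gradient step on the input: x_adv = x + ε · sign(∇ₓL(x, y)). A minimal sketch on a toy logistic-regression "model" (the weights, bias, and input below are illustrative assumptions, not anything from the paper):

```python
import math

# Toy logistic "model"; weights and bias are made-up for illustration.
w = [2.0, -1.0, 0.5]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability the model assigns to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """One FGSM step: x_adv = x + eps * sign(dL/dx).
    For cross-entropy on a logistic model, dL/dx = (p - y) * w."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

x = [1.0, 1.0, 1.0]            # classified as class 1 (p ≈ 0.83)
x_adv = fgsm(x, y=1, eps=0.5)  # uniform-magnitude, sign-only perturbation
print(predict(x), predict(x_adv))  # confidence drops below 0.5
```

Because only the sign of the gradient is used, every input dimension moves by exactly ε, which is what keeps the perturbation visually small when applied to image pixels.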

Explaining and Harnessing Adversarial Examples (Goodfellow et al.)

See how altering just one pixel can cause a neural network to make wrong classifications. So an obvious conclusion we might be tempted to draw is that neural nets are weaker than human vision and such attacks could never fool the human eye. But according to the paper Adversarial Examples that Fool both Computer Vision and Time-Limited Humans by Elsayed et al., it turns out some properties of machine vision can also be used to fool the human visual system.


Some clues suggesting transfer to human vision:

The fact that an adversarial image that fools one model can often fool other models as well enables researchers to perform black-box attacks, where the attacker does not have access to the model. Liu et al. have shown that the transferability of adversarial examples can be greatly improved by optimizing them to fool many machine-learning models. Moreover, recent studies on stronger adversarial attacks that transfer across multiple settings have sometimes produced adversarial examples that appear far more meaningful to human observers.


Biological vs. Artificial Vision:


Similarities:


Recent research has found similarities between deep CNNs and the primate visual system: certain activation patterns in deep CNNs resemble responses along the primate visual pathways.


Differences:


Images are typically presented to CNNs as a static rectangular grid, whereas the eye's spatial resolution depends on eccentricity: resolution is high in the central visual field and falls off with increasing eccentricity. Besides this, there are major computational differences.


Description of the Data-Generating Process:

Dataset:


Images were taken from the ImageNet dataset, which contains 1000 highly specific classes. Some are too specific for the typical human annotator to identify, so these were combined into six coarse classes.

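The fine-to-coarse relabeling amounts to a lookup table over ImageNet labels. A sketch of the idea (the label names and groups below are hypothetical; the paper's actual six groups are not reproduced here):

```python
# Hypothetical mapping from fine-grained ImageNet labels to coarse
# groups; labels and group names are illustrative assumptions.
COARSE = {
    "tabby": "cat", "Persian cat": "cat", "Siamese cat": "cat",
    "golden retriever": "dog", "beagle": "dog",
    "head cabbage": "vegetable", "broccoli": "vegetable",
}

def coarsen(label):
    """Map a fine-grained label to its coarse group, 'other' if unmapped."""
    return COARSE.get(label, "other")

print(coarsen("beagle"))  # -> dog
```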

Model:


An ensemble of k CNN models was trained on the dataset. Each model was also prepended with a retinal layer, which preprocesses the input by applying some of the transformations performed by the human eye. In this layer, eccentricity-dependent blurring of the image approximates the input received by the human visual cortex.

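The retinal-layer idea can be sketched as blur whose radius grows with distance from the fixation point. A toy 1-D version (the growth rate and box-average windowing are assumptions; the paper's layer operates on 2-D images):

```python
def retinal_blur(pixels, growth=0.5):
    """Average each pixel over a window whose radius grows with its
    eccentricity (distance from the image center): full resolution at
    the center, increasing blur toward the periphery."""
    n = len(pixels)
    center = (n - 1) / 2.0
    out = []
    for i in range(n):
        radius = int(abs(i - center) * growth)  # blur radius ~ eccentricity
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out

print(retinal_blur([10, 0, 0, 0, 0, 0, 10]))
# center pixels are untouched; edge pixels are averaged with neighbours
```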

Generating Images:


The main goal is to generate targeted examples for each group that transfer strongly across models. So for each pair of classes (A, B), the authors generated perturbations such that the models wrongly classify images from A as B, and similar images were constructed to be wrongly classified from B as A.

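A perturbation that transfers strongly can be found by attacking all k models jointly, e.g. by iterating FGSM-style steps on the gradient averaged over the ensemble. A toy sketch with made-up logistic models (not the paper's CNNs, loss, or hyperparameters):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# k = 3 toy linear models; all weights are illustrative assumptions.
ensemble = [[1.5, -0.8, 0.3],
            [2.0, -1.2, 0.6],
            [1.0, -0.5, 0.4]]

def p_target(w, x):
    """Probability a single model assigns to the target class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def targeted_ensemble_attack(x, steps=20, eps=0.1):
    """Iteratively nudge x so every model's target-class probability
    rises: step against the sign of the ensemble-averaged gradient of
    -log p_target, whose per-model value is -(1 - p) * w."""
    for _ in range(steps):
        grad = [0.0] * len(x)
        for w in ensemble:
            p = p_target(w, x)
            for j, wj in enumerate(w):
                grad[j] -= (1.0 - p) * wj / len(ensemble)
        # descend the averaged loss (sign-only step, as in FGSM)
        x = [xj - eps * (1 if g > 0 else -1 if g < 0 else 0)
             for xj, g in zip(x, grad)]
    return x

x_adv = targeted_ensemble_attack([-1.0, 1.0, -1.0])
print([round(p_target(w, x_adv), 2) for w in ensemble])  # all above 0.5
```

Averaging the gradient over the ensemble is what pushes the perturbation toward features shared by all models, which is the property the paper exploits for transfer.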

Results:

From the supplementary material of the paper (Elsayed et al.)

The image above is a typical image generated by this process. The first image looks like a cat, while the second looks like a dog. But strangely, the second image is also a cat, with some carefully crafted adversarial noise that makes us humans perceive it as a dog. The obvious change to the image is that the nose appears longer and thicker, but several feline features, like the whiskers, are also retained, in spite of which we see a dog.


The main takeaways are:


  • Attacks created by the method described in the paper transfer very well across different machine vision systems.
  • Using human annotators, the authors experimentally showed that the attack both influences the choice between incorrect classes and increases the human error rate.

  • Some examples of the different manipulations performed by the generating network, as seen in the paper (Elsayed et al.).

Conclusion:

In this work, the authors have shown that adversarial examples can fool multiple vision systems as well as time-limited humans. This provides some evidence of striking similarity between the machine and the human visual system. This can create avenues for further research in both neuroscience and computer science.


Translated from: https://towardsdatascience.com/adversarial-attacks-on-the-human-visual-system-38809d53dec1
