What are Adversarial Examples?


In recent times, Machine Learning (a subset of Artificial Intelligence) has been at the forefront of technological advancement. It appears as though it is a strong contender for being the tool that could catapult human abilities and efficiency to the next level.


While Machine Learning is the term that is commonly used, it is a rather large subset within the realm of AI. Most of the best machine learning based systems in use today actually belong to a subset of Machine Learning known as Deep Learning. The term Deep Learning is used to refer to a Machine learning approach that aims to mimic the functioning of the human brain to some extent. This helps bestow upon machines the power to perform certain tasks that humans can, such as object detection, object classification and much more. The Deep Learning models that are used to achieve this are often known as Neural Networks (since they try to replicate the functioning of the neural connections in the brain).


Just like any other software, however, Neural Networks come with their own set of vulnerabilities, and it is important for us to acknowledge them so that their ethical implications can be kept in mind when further work in the field is carried out. In recent times, the vulnerabilities that have gained the most prominence are known as Adversarial Examples. This article aims to shed some light on the nature of Adversarial Examples and some of the ethical concerns that these vulnerabilities raise for the development of deep learning products.


What are Adversarial Examples?

The “regular” computer systems that most of us are familiar with can be attacked by hackers, and in the same way, Adversarial Examples can be thought of as a way of “attacking” a deep learning model. The concept of Adversarial Examples is best explained by taking the example of an Image Classification Neural Network. Image classification networks learn the features of images from a training dataset and are later able to identify what is present in a new image that they have never seen before. Researchers have found that it is possible to apply a “perturbation” to an image in such a way that the change is too small to be noticed by the human eye, yet it completely changes the prediction made by the Machine Learning model.


The most famous example is an Adversarial Example generated for the GoogLeNet model (Szegedy et al., 2014), which was trained on the ImageNet dataset.


Source: Explaining and Harnessing Adversarial Examples by I. J. Goodfellow, J. Shlens & C. Szegedy

As can be seen in the image above, the GoogLeNet model predicted that the initial image was a Panda with a confidence of 57.7%. However, after adding the slight perturbation, even though there is no apparent visual change in the image, the model now classifies it as a Gibbon with a confidence of 99.3%.


The perturbation added above might appear to be a random assortment of pixels; in reality, however, each pixel in the perturbation has a value (represented as a color) that is calculated using a complicated mathematical algorithm. Adversarial Examples are not limited to image classification models; they can also be crafted for audio and other types of input, but the underlying principle remains the same as what has been explained above.

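To make this concrete, below is a minimal sketch of the simplest such algorithm, the Fast Gradient Sign Method (FGSM) introduced in the Goodfellow et al. paper cited above. It assumes a generic PyTorch image classifier; the model, image and label names are illustrative placeholders rather than code from any particular project.

import torch.nn.functional as F

def fgsm_perturbation(model, image, label, epsilon=0.007):
    # Fast Gradient Sign Method: nudge every pixel by +/- epsilon in the
    # direction that increases the classification loss for the true label.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the correct class
    loss.backward()
    perturbation = epsilon * image.grad.sign()   # small, barely visible per-pixel change
    # Assumes pixel values are scaled to [0, 1]; clamp keeps the result a valid image.
    adversarial = (image + perturbation).clamp(0.0, 1.0)
    return adversarial.detach(), perturbation.detach()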

There are many different algorithms that have varying degrees of success on different types of models, and implementations of many of them can be found in the CleverHans library (Papernot et al.).

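As a rough illustration of how such a library might be called from a PyTorch project, consider the sketch below. The module paths and argument names follow recent CleverHans releases but have changed between versions, so treat this as an approximation rather than a definitive reference; the ResNet-18 stand-in model and the random input tensor are assumptions made purely for the example.

import numpy as np
import torch
import torchvision.models as models
# Import paths as in recent CleverHans releases; older versions differ.
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method
from cleverhans.torch.attacks.projected_gradient_descent import projected_gradient_descent

model = models.resnet18(weights=None).eval()  # stand-in; any trained PyTorch classifier works
x = torch.rand(1, 3, 224, 224)                # stand-in for a real batch of images

# Single-step attack (FGSM) and its iterative counterpart (PGD).
x_fgsm = fast_gradient_method(model, x, eps=0.03, norm=np.inf)
x_pgd = projected_gradient_descent(model, x, eps=0.03, eps_iter=0.005,
                                    nb_iter=40, norm=np.inf)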

Generally, Adversarial attacks can be classified into one of two types:


  • Targeted Adversarial Attack
  • Untargeted Adversarial Attack

Targeted Adversarial Attack

A targeted Adversarial Attack is an attack in which the aim of the perturbation is to cause the model to predict a specific, attacker-chosen wrong class.

Source: anishathalye.com

The image on the left shows that the original image was correctly classified as a Tabby Cat. As part of the Targeted Attack that was conducted, the attacker decided that he would like the image to be classified as guacamole instead. Thus, the perturbation was created in such a manner that it would force the model to predict the perturbed image as guacamole and nothing else (i.e. guacamole was the target class).
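
A targeted attack can be sketched as a small variation on the FGSM example shown earlier: instead of increasing the loss for the true label, the attacker decreases the loss for the label they want the model to output. As before, model, image and target_label are illustrative placeholders, not code from the demo cited above.

import torch.nn.functional as F

def targeted_fgsm(model, image, target_label, epsilon=0.007):
    # Single-step targeted attack: push the input towards the
    # attacker-chosen class by descending the loss for that class.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_label)
    loss.backward()
    # Note the minus sign: reducing the loss for the target class
    # raises its predicted probability.
    adversarial = (image - epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()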

Untargeted Adversarial Attack

As opposed to a Targeted Attack, an Untargeted Adversarial Attack involves the generation of a perturbation that will cause the model to predict the image as something that it is not. However, the attacker does not explicitly choose what he would like the wrong prediction to be.

An intuitive way to think about the difference between a Targeted Attack and an Untargeted Attack is that a Targeted Attack aims to generate a perturbation that maximizes the probability of a specific class chosen by the attacker (i.e. the target class), whereas an Untargeted Attack aims to generate a perturbation that minimizes the probability of the actual class to such an extent that some other class ends up with a higher probability than the actual class.
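
In schematic code, and only as a sketch of the two objectives rather than a full attack, the difference could be written as follows (lower values are better for the attacker; logits is the model output for the perturbed image):

import torch.nn.functional as F

def targeted_objective(logits, target_class):
    # Targeted: drive the probability of the attacker-chosen class up.
    return F.cross_entropy(logits, target_class)

def untargeted_objective(logits, true_class):
    # Untargeted: drive the probability of the correct class down
    # (maximizing its loss, i.e. minimizing its negative).
    return -F.cross_entropy(logits, true_class)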

Ethical Concerns

As you read this, you might begin to think of some of the ethical concerns arising from Adversarial Examples; however, the true magnitude of these concerns only becomes apparent when we consider a real-world example. Take the development of self-driving cars. Self-driving cars tend to use some kind of deep learning framework to identify road signs, which helps the car act on the basis of those signs. It turns out that by making minor physical alterations to real road signs, those signs too can serve as Adversarial Examples (it is possible to generate Adversarial Examples in the real world as well). In such a situation, one could modify a Stop sign in such a manner that cars would interpret it as a Turn Left sign, and this could have disastrous effects.

An example of this can be seen in the image below, where a physical change made to the Stop sign causes it to be interpreted as a Speed Limit sign.

Source: BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain by T. Gu, S. Dolan-Gavitt & S. Garg

As a result of this, it is with good reason that people see some ethical concerns with the development of technologies like self-driving cars. While this should in no way serve as an impediment to the development of such technologies, it should make us wary of the vulnerabilities in Deep Learning models. We must ensure that further research is conducted to find ways to secure models against such attacks so that advanced technologies that make use of deep learning (like self-driving cars) become safe for use in production.

Translated from: https://medium.com/analytics-vidhya/what-are-adversarial-examples-e796b4b00d32
