Implementing Logic Gates in Neural Networks and a Solution for XOR
Neural Networks have risen to prominence in recent years as one of the most powerful machine learning techniques (and over-used buzzwords) in tech. In this post I’ll give a beginner-friendly overview of how Neural Nets work and how they can be used to solve a simple but fundamental problem: representing logic gates. This post is based on the book “Neural Networks and Learning Machines” by Simon Haykin, for anyone who would like to explore the topic in more detail.
Multi-Layer Perceptrons (MLPs) are perhaps the most commonly used form of Neural Net. The standard MLP architecture is feedforward, in that activation flows one way: from input to output. An MLP architecture can be broken down into three sections: input layers, hidden layers and output layers. Input layers take data into the network, hidden layers perform processing, and output layers send out the result.
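To make that flow concrete, here is a minimal sketch (not from the original post) of a feedforward pass through a small MLP with one hidden layer. The weight values passed in are placeholders chosen purely for illustration, and the step activation with threshold 0 matches the perceptrons described below.

```python
def step(x):
    # Threshold activation: fire (1) if the summed input exceeds 0.
    return 1 if x > 0 else 0

def forward(inputs, hidden_weights, output_weights):
    # Hidden layer: each unit sums its weighted inputs, then applies the activation.
    hidden = [step(sum(w * i for w, i in zip(ws, inputs))) for ws in hidden_weights]
    # Output layer: the same operation, applied to the hidden activations.
    return [step(sum(w * h for w, h in zip(ws, hidden))) for ws in output_weights]

# Example: a 2-3-1 network with arbitrary illustrative weights.
print(forward([1, 0], [[1, 1], [1, -1], [-1, 1]], [[1, 1, 1]]))
```

Each list of weights plays the role of one unit's incoming connections; activation flows strictly from the input list, through the hidden layer, to the output, with no cycles.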
Perceptrons (like all Neural Nets) are composed of units, which represent the neurons in the human brain on which neural nets are based. A unit takes one or more inputs, sums them together, and passes data on if the summed value is greater than the threshold specified by the activation function. The threshold in the perceptrons shown below is 0. A bias unit is another form of unit which always activates, typically sending a 1 to all units to which it is connected. The diagram below shows Perceptron implementations for both AND and OR logic gates.
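The AND and OR perceptrons just described can be sketched in code as single threshold units with a bias. The specific weight and bias values below are one common choice, not necessarily the ones in the original diagram; the threshold of 0 and the always-on bias unit follow the description above.

```python
def perceptron(inputs, weights, bias_weight):
    # Weighted sum of the inputs, plus the bias unit's contribution
    # (the bias unit always sends a 1); fire if the total exceeds 0.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias_weight * 1
    return 1 if total > 0 else 0

def AND(x1, x2):
    # x1 + x2 - 1.5 > 0 only when both inputs are 1.
    return perceptron([x1, x2], [1, 1], -1.5)

def OR(x1, x2):
    # x1 + x2 - 0.5 > 0 when at least one input is 1.
    return perceptron([x1, x2], [1, 1], -0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, AND(a, b), OR(a, b))
```

Geometrically, each choice of weights and bias defines a straight line in the input plane, with the unit firing on one side of it; that is exactly why these gates are easy and XOR, below, is not.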
AND and OR logic gates

A major issue in the early days of neural net development was representing the XOR function (the logic for which is shown in the table below). A simple Perceptron with only input and output layers (as used for AND and OR above) can only separate data with a straight line, while XOR is not linearly separable.
However, a workaround was found when it was shown that x1 XOR x2 = (x1 OR x2) AND (NOT(x1 AND x2)). With this knowledge, the Perceptron shown below can be developed to represent XOR.
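Following that identity, XOR can be sketched as a composition of threshold units: a hidden layer computing OR and NOT-AND, and an output unit computing their AND. The weights below are illustrative choices, not necessarily those in the original diagram (NOT(x1 AND x2) is obtained by negating the AND unit's weights and bias).

```python
def unit(inputs, weights, bias_weight):
    # A single threshold unit: weighted sum plus bias, fires if the total exceeds 0.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias_weight
    return 1 if total > 0 else 0

def XOR(x1, x2):
    # Hidden layer: OR(x1, x2), and NOT(x1 AND x2).
    h_or = unit([x1, x2], [1, 1], -0.5)
    h_not_and = unit([x1, x2], [-1, -1], 1.5)  # fires unless both inputs are 1
    # Output layer: AND of the two hidden units.
    return unit([h_or, h_not_and], [1, 1], -1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, XOR(a, b))  # prints 0, 1, 1, 0 down the last column
```

The extra hidden layer is what makes this work: each hidden unit draws its own straight line, and the output unit combines the two half-planes into the non-linearly-separable XOR region.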
The table below shows that the XOR perceptron returns the correct output for each of the four possible input values.
In conclusion, this blog has covered a few simple uses for Perceptrons. Perceptrons, and especially MLPs, are a very powerful technique with a wide variety of applications.
Translated from: https://medium.com/analytics-vidhya/implementing-logic-gates-in-neural-nets-and-a-solution-for-xor-ebf68cf8109b