
Deep Learning: Reviving the AI Dream Once Again (Marr's Theory, the Semantic Gap, Visual Neural Networks, Neuromorphic Engineering)

Published: 2023/12/31

Nearly every resurgence of neural networks has been accompanied by the same claim: that it advances the dream of artificial intelligence.

Preface:

Marr's theory of visual hierarchy

Marr's theory (per Baidu Baike): the framework consists of three levels of representation that vision builds, maintains, and interprets:

a. The primal sketch: because intensity changes in an image often correspond to physical properties such as object boundaries, this level mainly describes the image's intensity changes and their local geometric relations.
b. The 2.5-D sketch: viewer-centered; it describes the orientation, contour, depth, and other properties of visible surfaces.
c. The 3-D model: object-centered; a representation of three-dimensional shape used for processing and recognizing objects.
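The idea behind the primal sketch, that intensity changes mark likely object boundaries, can be illustrated with a toy NumPy sketch. This is only an illustration of the intuition, not Marr's actual primal-sketch machinery; the image, threshold, and function name are invented here:

```python
import numpy as np

def primal_sketch(image, threshold=0.5):
    """Toy 'primal sketch': mark pixels where local intensity changes sharply."""
    gy, gx = np.gradient(image.astype(float))  # central-difference gradients
    magnitude = np.hypot(gx, gy)               # strength of local change
    return magnitude > threshold               # boolean edge map

# A bright square on a dark background: the only intensity changes
# are at the square's boundary, which is exactly what gets marked.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = primal_sketch(img)
print(edges.astype(int))
```

The interior of the square and the flat background produce zero gradient, so only boundary pixels survive the threshold, mirroring the claim that intensity changes tend to track physical edges.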
The semantic gap

Semantic gap (per Wikipedia): in CBIR, the "semantic gap" is the distance between low-level and high-level retrieval needs, caused by the mismatch between the visual information a computer extracts from an image and the semantic interpretation a user gives it. The sensory gap is the gap between an object in the real world and the (computational) description recorded from a scene of it. The semantic gap is the lack of agreement between the information extracted from visual data and the interpretation that the same data have for a user in a given situation.
A hierarchical model of the visual neural network

Human physiology has been studied for hundreds of years, yet research on the visual nervous system is still at the stage of experimental simulation; true lesion (blocking) experiments are not possible. What current physiological research does show is that the visual nervous system exhibits hierarchical and sparse properties, from which a mapping from the visual nervous system to a semantic description system (across the semantic gap) can be derived.

From this point on, deep networks pointed out a direction for bridging the semantic gap: CNNs can intuitively be seen as modeling the human nervous system, and the "depth" in deep learning acquired real meaning.
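The "hierarchical and sparse" intuition can be sketched with two stacked convolution + ReLU stages in NumPy. This is a hedged illustration only: the filters are hand-picked, not learned, and the function names are invented for this example.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D convolution (really cross-correlation) for illustration."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

relu = lambda z: np.maximum(z, 0.0)  # zeroes most responses: sparsity

img = np.zeros((10, 10))
img[:, 5:] = 1.0                              # a vertical edge
edge_filter = np.array([[-1.0, 1.0]])         # stage 1: local contrast detector
layer1 = relu(conv2d_valid(img, edge_filter))
layer2 = relu(conv2d_valid(layer1, np.ones((3, 3))))  # stage 2: pools stage-1 evidence

print((layer1 > 0).mean())  # only a small fraction of units respond
```

Stage 1 responds only along the edge (a sparse code), and stage 2 builds on stage 1's output rather than on raw pixels, which is the hierarchy the text attributes to the visual system.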


(1): Deep Learning: Advancing the Dream of Artificial Intelligence

Original link: http://www.csdn.net/article/2013-05-29/2815479

Key words: shallow learning, deep learning

Shallow learning: an important characteristic of shallow models is that they assume sample features are extracted by hand, from human experience, while the model itself is mainly responsible for classification or prediction. A shallow model is, roughly, a neural network with at most one hidden layer. Provided the model is applied correctly (assuming, say, that the Internet company has hired genuine machine-learning experts), feature quality becomes the bottleneck of overall system performance, so experience plays a decisive role!
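The shallow-learning pipeline described above can be sketched end to end: an "expert" hand-picks the features, and the model only does the final classification. Everything here (the two summary-statistic features, the synthetic data, the perceptron trainer) is invented for illustration, not taken from the original article:

```python
import numpy as np

def hand_crafted_features(x):
    # The human expert decides these two statistics are what matters.
    return np.array([x.mean(), x.std()])

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron: update weights only on misclassified samples."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else -1
            if pred != yi:
                w += lr * yi * xi
    return w

rng = np.random.default_rng(0)
raw_a = rng.normal(0.0, 1.0, size=(20, 50))   # class -1: low-mean signals
raw_b = rng.normal(2.0, 1.0, size=(20, 50))   # class +1: high-mean signals
X = np.array([hand_crafted_features(x) for x in np.vstack([raw_a, raw_b])])
y = np.array([-1] * 20 + [1] * 20)

w = train_perceptron(X, y)
preds = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)
print((preds == y).mean())
```

Because the expert's features happen to separate the classes, the linear model succeeds; swap in features that discard the mean and the same model fails, which is exactly the "features are the bottleneck" point.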

深度學(xué)習(xí):百度在線學(xué)習(xí)案例。

DNNs and the story behind Microsoft's simultaneous interpretation: http://www.csdn.net/article/2013-06-09/2815737

"When we talk about AI, we mean a high degree of abstraction. Deep Learning is one way of abstracting, but it is far from the whole picture. Being able to recognize animals with a neural network does not mean you understand the world; I would even call it 'pattern recognition' rather than 'intelligence'," Seide argues. "'Depth' matters for intelligent systems, but it is not all of intelligence. Speech recognition can be seen as a microcosm of AI, and DNNs are only one part of speech-recognition technology; measured by lines of code, they are in fact a very small part of it."

PS: this reminds me of the philosophical debate about the "Chinese Room."

(2): A frontier topic in machine learning: Deep Learning

Machine-learning frontier topics: http://elevencitys.com/?p=1854

Original link: http://blog.sina.com.cn/s/blog_46d0a3930101fswl.html

Since 2006, the field of machine learning has made breakthrough progress.

The Turing test, at least, no longer seems so far out of reach. As for the technical means, they depend not only on cloud computing's ability to process big data in parallel, but also on algorithms, and that algorithm is Deep Learning. With Deep Learning, humanity may finally have found a way to handle the age-old problem of "abstract concepts."

So academia is busy recruiting the masters of the field; Alex Smola joining CMU is one episode against this backdrop. The remaining suspense is which universities Geoffrey Hinton and Yoshua Bengio, the two heavyweights, will end up joining.

Geoffrey Hinton has previously worked at Cambridge and CMU and currently teaches at the University of Toronto. No doubt plenty of top schools are trying to poach him.

Yoshua Bengio's path is simpler: a doctorate from McGill University, then a postdoc at MIT under Mike Jordan. He currently teaches at the University of Montreal.

The revolution ignited by Deep Learning is not only of great academic significance; it is also very close to the money, extremely close. If the relevant technical problems are a mountain, then behind that mountain lies a giant open-pit gold mine. Once the technical problems are solved, what remains is to deploy capital and commercial muscle in a land grab.

So the big companies have massed their forces, eyeing the prize. Google split its forces in two: the left column, led by Jeff Dean and Andrew Ng, focused on breakthroughs in Deep Learning algorithms and applications [3] (Introduction to Deep Learning: http://en.wikipedia.org/wiki/Deep_learning).

(3): Neuromorphic Engineering: A Stepping Stone for Artificial Intelligence

The goal of neuromorphic engineers: http://elevencitys.com/?p=6265

The following is pasted in full!

Building a computer with the brain's three key characteristics is the goal of neuromorphic engineers: low power, fault tolerance, and self-learning. The human brain runs on about 20 W, and that is only the peak (TDP); typical consumption is even lower. Fault tolerance comes from parallel processing, which also means the system is not exact but probabilistic. Self-learning is a system-level property spanning the whole perception-feedback-decision loop, whose complexity we cannot yet analyze.

Here I would like to introduce the progress of neuromorphic engineering, a branch of engineering built on electronic devices. Its main goal is to emulate complex neural networks and ion-channel dynamics in real time, using highly compact, power-efficient CMOS analog VLSI technology. Compared with traditional software-based computer modeling and simulation, this approach can be implemented at an extremely small size with low power requirements when used for large-scale, high-speed simulation of neurons. This feature opens the door to real computing applications such as neuroprostheses, brain-machine interfaces, neurorobotics, machine learning, and so on. [1]


A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change. Neuromorphic engineering is a new interdisciplinary discipline that takes inspiration from biology, physics, mathematics, computer science, and engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems. [2]

Our human brain has three distinct features: highly parallel processing, quick adaptability, and self-configuration. We now have a deep understanding of digital computers from top to bottom, from the operating system down to the hardware design. However, some analog computing tasks, such as voice recognition and learning, are still not easy to implement on digital computers. In terms of accuracy and power efficiency, the mammalian brain is remarkably powerful and hard to figure out. Since artificial intelligence was proposed in the last century, we have invested a great deal of research effort across many fields, such as computer science, physiology, and chemistry, to explain the brain. Yet it seems we know far more about the universe than about the brain; is that sad, or promising? The only thing we are sure of is that the brain does more than just information processing.

Thus engineers began to look to biology for help. But it is not easy to emulate such a large-scale computing machine, one with about 85 billion neurons. Neuromorphic engineering is an important and promising branch for uncovering the mysteries of the brain. Neural computing is characterized by high parallelism and adaptive learning, while being bad at arithmetic. As in real CMOS technology, the placement of interconnect is a tricky job in neuromorphic engineering. This field offers the potential to build machines whose very nature is learning.


The DARPA SyNAPSE Program is an ongoing project to build an electronic neuromorphic machine technology that scales to biological levels. It has reached several milestones since it was launched in 2008. Ultimately it aims to recreate 10 billion neurons and 100 trillion synapses, consume one kilowatt (about the same as a small electric heater), and occupy less than two liters of space. [3]

The initial phase of the SyNAPSE program developed nanometer-scale electronic synaptic components capable of adapting the connection strength between two neurons in a manner analogous to that seen in biological systems (Hebbian learning), and simulated the utility of these synaptic components in core microcircuits that support the overall system architecture.
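The Hebbian adaptation mentioned above, "connections strengthen when pre- and post-synaptic neurons fire together", can be sketched as a toy rate-based update. This is an illustration of the textbook rule only, not SyNAPSE's actual hardware mechanism; the rates and learning rate are made up:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian rule: weight change is proportional to pre * post activity."""
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # pre-synaptic firing rates (middle neuron silent)
post = np.array([1.0, 0.5])       # post-synaptic firing rates
w = np.zeros((2, 3))              # synaptic weights, initially zero

for _ in range(5):                # repeated co-activation strengthens synapses
    w = hebbian_update(w, pre, post)

print(w)
# Synapses between co-active pairs have grown; the silent input's column stays zero.
```

Note the rule as written only strengthens weights; biological and hardware versions add decay or spike-timing dependence (as in the plasticity model of [1]) to keep weights bounded.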

Continuing efforts will focus on hardware development through the stages of microcircuit development, fabrication process development, single-chip system development, and multi-chip system development. In support of these hardware developments, the program seeks to develop increasingly capable architecture and design tools, very large-scale computer simulations of the neuromorphic electronic systems to inform the designers and validate the hardware prior to fabrication, and virtual environments for training and testing the simulated and hardware neuromorphic systems. [4]

For more background, see: http://homes.cs.washington.edu/~diorio/Talks/InvitedTalks/Telluride99/


References:

[1] Rachmuth, Guy, et al. "A biophysically-based neuromorphic model of spike rate- and timing-dependent plasticity." Proceedings of the National Academy of Sciences 108.49 (2011): E1266-E1274.

[2] http://en.wikipedia.org/wiki/Neuromorphic_engineering

[3] http://www.artificialbrains.com/darpa-synapse-program

[4] http://en.wikipedia.org/wiki/SyNAPSE


(4): A final appeal:

Whatever happens, if one day AI truly finds the right program-building model, many of us will still hope that our understanding of that model exceeds our understanding of ourselves. A black box means loss of control, which inevitably leads to unforeseeable consequences, and that is something no one working in science wants to see.

You get out only as much as you put in, and you must put in that much to get it out; expecting results once and for all is a path to extinction.

