
A Gentle Introduction to Probabilistic Programming Languages


I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:


Probabilistic thinking is an incredibly valuable tool for decision making. From economists to poker players, people who can think in terms of probabilities tend to make better decisions when faced with uncertain situations. The fields of probability and game theory have been established for centuries, but are now experiencing a renaissance with the rapid evolution of artificial intelligence (AI). Can we incorporate probabilities as a first-class citizen of software code? Welcome to the world of probabilistic programming languages (PPLs).


The use of statistics to overcome uncertainty is one of the pillars of a large segment of the machine learning market. Probabilistic reasoning has long been considered one of the foundations of inference algorithms and is represented in all major machine learning frameworks and platforms. Recently, probabilistic reasoning has seen major adoption within tech giants like Uber, Facebook and Microsoft, helping to push the research and technological agenda in the space. Specifically, PPLs have become one of the most active areas of development in machine learning, sparking the release of some new and exciting technologies.


什么是概率編程語言? (What are Probabilistic Programming Languages?)

Conceptually, probabilistic programming languages (PPLs) are domain-specific languages that describe probabilistic models and the mechanics to perform inference in those models. The magic of PPLs lies in combining the inference capabilities of probabilistic methods with the representational power of programming languages.


In a PPL program, assumptions are encoded with prior distributions over the variables of the model. During execution, a PPL program will launch an inference procedure to automatically compute the posterior distributions of the parameters of the model based on observed data. In other words, inference adjusts the prior distribution using the observed data to give a more precise model. The output of a PPL program is a probability distribution, which allows the programmer to explicitly visualize and manipulate the uncertainty associated with a result.

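Concretely, the inference step is an application of Bayes' rule:

p(θ | data) = p(data | θ) · p(θ) / p(data)

where p(θ) is the prior encoding the program's assumptions and p(θ | data) is the posterior that the inference procedure computes.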

To illustrate the simplicity of PPLs, let's use one of the most famous problems of modern statistics: a biased coin toss. The idea of this problem is to calculate the bias of a coin. Let's assume that xi = 1 if the result of the i-th coin toss is heads and xi = 0 if it is tails. Our context assumes that individual coin tosses are independent and identically distributed (IID) and that each toss follows a Bernoulli distribution with parameter θ: p(xi = 1 | θ) = θ and p(xi = 0 | θ) = 1 − θ. The latent (i.e., unobserved) variable θ is the bias of the coin. The task is to infer θ given the results of previously observed coin tosses, that is, p(θ | x1, x2, . . . , xN).

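This model has a closed-form answer to check the PPL programs against: a Uniform(0, 1) prior is a Beta(1, 1) distribution, and the Beta family is conjugate to the Bernoulli likelihood, so after observing h heads in N tosses the posterior is Beta(h + 1, N − h + 1). For the ten tosses used in the code below (two heads, eight tails), that is Beta(3, 9), with posterior mean 3/12 = 0.25.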

Modeling a simple program like the biased coin toss in a general-purpose programming language can result in hundreds of lines of code. However, PPLs like Edward express this problem in a few simple lines of code:


# Edward 1.x (runs on TensorFlow 1.x)
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Bernoulli, Empirical, Uniform

# Model
theta = Uniform(0.0, 1.0)
x = Bernoulli(probs=theta, sample_shape=10)
# Data
data = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])
# Inference
qtheta = Empirical(tf.Variable(tf.ones(1000) * 0.5))
inference = ed.HMC({theta: qtheta}, data={x: data})
inference.run()
# Results
mean, stddev = ed.get_session().run([qtheta.mean(), qtheta.stddev()])
print("Posterior mean:", mean)
print("Posterior stddev:", stddev)

The Holy Grail: Deep PPLs

For decades, the machine learning space was divided into two irreconcilable camps: statistics and neural networks. One camp gave birth to probabilistic programming while the other was behind transformational movements such as deep learning. Recently, the two schools of thought have come together to combine deep learning and Bayesian modeling into single programs. The ultimate expression of this effort is deep probabilistic programming languages (Deep PPLs).


Conceptually, Deep PPLs can express Bayesian neural networks with probabilistic weights and biases. Practically speaking, Deep PPLs have materialized as new probabilistic languages and libraries that integrate seamlessly with popular deep learning frameworks.

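To make the idea of probabilistic weights and biases concrete, here is a minimal sketch, written in Pyro (one of the languages covered below). It models a single linear unit whose weight and bias are random variables with Normal priors rather than fixed values; the names, priors and noise scale are illustrative assumptions, not part of any particular published model:

import torch
import pyro
import pyro.distributions as dist

def bayesian_linear_unit(x, y=None):
    # Weight and bias are sampled, not fixed: the "network" is a distribution
    # over functions, and inference yields a posterior over (w, b).
    w = pyro.sample("w", dist.Normal(0.0, 1.0))  # probabilistic weight (illustrative prior)
    b = pyro.sample("b", dist.Normal(0.0, 1.0))  # probabilistic bias (illustrative prior)
    mean = w * x + b
    with pyro.plate("data", len(x)):
        return pyro.sample("obs", dist.Normal(mean, 0.1), obs=y)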

3 Deep PPLs You Need to Know About

The field of probabilistic programming languages (PPLs) has been exploding with research and innovation in recent years. Most of that innovation has come from combining PPLs and deep learning methods to build neural networks that can efficiently handle uncertainty. Tech giants such as Google, Microsoft and Uber have been responsible for pushing the boundaries of Deep PPLs into large-scale scenarios. Those efforts have translated into completely new Deep PPL stacks that are becoming increasingly popular within the machine learning community. Let's explore some of the most recent advancements in the Deep PPL space.


Edward

Edward is a Turing-complete probabilistic programming language (PPL) written in Python. Edward was originally championed by the Google Brain team but now has an extensive list of contributors. The original research paper on Edward was published in March 2017, and since then the stack has seen a lot of adoption within the machine learning community. Edward fuses three fields: Bayesian statistics and machine learning, deep learning, and probabilistic programming. The library integrates seamlessly with deep learning frameworks such as Keras and TensorFlow.


# Model
theta = Uniform(0.0, 1.0)
x = Bernoulli(probs=theta, sample_shape=10)
# Data
data = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])
# Inference
qtheta = Empirical(tf.Variable(tf.ones(1000) * 0.5))
inference = ed.HMC({theta: qtheta}, data={x: data})
inference.run()
# Results
mean, stddev = ed.get_session().run([qtheta.mean(), qtheta.stddev()])
print("Posterior mean:", mean)
print("Posterior stddev:", stddev)
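Sampling is not the only option in Edward. The same model can be fit with variational inference: the programmer declares a parameterized variational family (a Beta distribution here) and runs KLqp, which adjusts qalpha and qbeta to bring the approximation close to the true posterior: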
# Inference Guide
from edward.models import Beta
qalpha = tf.Variable(1.0)
qbeta = tf.Variable(1.0)
qtheta = Beta(qalpha, qbeta)
# Inference
inference = ed.KLqp({theta: qtheta}, {x: data})
inference.run()
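For this conjugate model the exact posterior is Beta(3, 9), so a well-fit guide should end up with qalpha ≈ 3 and qbeta ≈ 9, up to the usual noise of stochastic optimization.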

Pyro

Pyro is a deep probabilistic programming language (PPL) released by Uber AI Labs. Pyro is built on top of PyTorch and is based on four fundamental principles:


  • Universal: Pyro is a universal PPL — it can represent any computable probability distribution. How? By starting from a universal language with iteration and recursion (arbitrary Python code), and then adding random sampling, observation, and inference. (A short sketch of this universality follows the list.)


  • Scalable: Pyro scales to large data sets with little overhead above hand-written code. How? By building modern black box optimization techniques, which use mini-batches of data, to approximate inference.


  • Minimal: Pyro is agile and maintainable. How? Pyro is implemented with a small core of powerful, composable abstractions. Wherever possible, the heavy lifting is delegated to PyTorch and other libraries.


  • Flexible: Pyro aims for automation when you want it and control when you need it. How? Pyro uses high-level abstractions to express generative and inference models, while allowing experts to easily customize inference.

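To illustrate the universality principle, the following minimal sketch (assuming a recent, 1.x-era Pyro release) defines a geometric distribution through ordinary Python recursion plus pyro.sample — a distribution with unbounded support that a fixed-graph formalism would struggle to express:

import torch
import pyro
import pyro.distributions as dist

def geometric(p, t=0):
    # Flip a coin at each step; unique sample names keep the trace well-formed.
    x = pyro.sample(f"x_{t}", dist.Bernoulli(p))
    if x.item() == 1:
        return t  # number of failures before the first success
    return geometric(p, t + 1)

print(geometric(torch.tensor(0.5)))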

Like other PPLs, Pyro combines deep learning models and statistical inference using a simple syntax, as illustrated in the following code:


# Pyro 0.x-era API (torch.autograd.Variable, pyro.infer.Marginal)
import numpy as np
import torch
from torch.autograd import Variable
import pyro
from pyro.distributions import Uniform, Bernoulli, Beta
from pyro.infer import SVI
from pyro.optim import Adam

# Model
def coin():
    theta = pyro.sample("theta", Uniform(
        Variable(torch.Tensor([0])),
        Variable(torch.Tensor([1]))))
    pyro.sample("x", Bernoulli(
        theta * Variable(torch.ones(10))))
# Data
data = {"x": Variable(torch.Tensor(
    [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]))}
# Inference
cond = pyro.condition(coin, data=data)
sampler = pyro.infer.Importance(cond, num_samples=1000)
post = pyro.infer.Marginal(sampler, sites=["theta"])
# Result
samples = [post()["theta"].data[0] for _ in range(1000)]
print("Posterior mean:", np.mean(samples))
print("Posterior stddev:", np.std(samples))

# Inference Guide (variational alternative to importance sampling)
def guide():
    qalpha = pyro.param("qalpha", Variable(torch.Tensor([1.0]), requires_grad=True))
    qbeta = pyro.param("qbeta", Variable(torch.Tensor([1.0]), requires_grad=True))
    pyro.sample("theta", Beta(qalpha, qbeta))
# Inference
svi = SVI(cond, guide, Adam({}), loss="ELBO", num_particles=7)
for step in range(1000):
    svi.step()
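Note that this listing uses Pyro's early 0.x API: torch.autograd.Variable has since been folded into plain tensors, and pyro.infer.Marginal and string-valued losses were removed in later releases. As a rough sketch of the same coin model against a modern release (assuming Pyro ≥ 1.0), it might look like this:

import torch
from torch.distributions import constraints
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

data = torch.tensor([0., 1., 0., 0., 0., 0., 0., 0., 0., 1.])

def model(data):
    # Uniform prior over the coin's bias, Bernoulli likelihood per toss.
    theta = pyro.sample("theta", dist.Uniform(0.0, 1.0))
    with pyro.plate("tosses", len(data)):
        pyro.sample("x", dist.Bernoulli(theta), obs=data)

def guide(data):
    # Variational family: a Beta distribution with learnable, positive parameters.
    qalpha = pyro.param("qalpha", torch.tensor(1.0), constraint=constraints.positive)
    qbeta = pyro.param("qbeta", torch.tensor(1.0), constraint=constraints.positive)
    pyro.sample("theta", dist.Beta(qalpha, qbeta))

svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for step in range(2000):
    svi.step(data)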

Infer.Net

Microsoft recently open-sourced Infer.Net, a framework that simplifies probabilistic programming for .Net developers. Microsoft Research had been working on Infer.Net since 2004, but it is only recently, with the emergence of deep learning, that the framework has become really popular. Infer.Net provides some strong differentiators that make it a strong choice for developers venturing into the Deep PPL space:


  • Rich modelling language: Support for univariate and multivariate variables, both continuous and discrete. Models can be constructed from a broad range of factors including arithmetic operations, linear algebra, range and positivity constraints, Boolean operators, Dirichlet-Discrete, Gaussian, and many others.


  • Multiple inference algorithms: Built-in algorithms include Expectation Propagation, Belief Propagation (a special case of EP), Variational Message Passing and Gibbs sampling.


  • Designed for large scale inference: Infer.NET compiles models into inference source code which can be executed independently with no overhead. It can also be integrated directly into your application.


  • User-extendable: Probability distributions, factors, message operations and inference algorithms can all be added by the user. Infer.NET uses a plug-in architecture which makes it open-ended and adaptable.


Let's look at our coin toss example in Infer.Net:


// Requires the Microsoft.ML.Probabilistic NuGet package
using System;
using Microsoft.ML.Probabilistic.Models;

Variable<bool> firstCoin = Variable.Bernoulli(0.5);
Variable<bool> secondCoin = Variable.Bernoulli(0.5);
Variable<bool> bothHeads = firstCoin & secondCoin;
InferenceEngine engine = new InferenceEngine();
Console.WriteLine("Probability both coins are heads: " + engine.Infer(bothHeads));
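Two fair coins land heads together with probability 0.5 × 0.5 = 0.25, so the engine, which performs exact inference here, should report a Bernoulli(0.25) distribution for bothHeads. Observing bothHeads as false and re-inferring firstCoin would update it from Bernoulli(0.5) to roughly Bernoulli(0.33), since 0.25 / 0.75 = 1/3 — the backward-reasoning direction that makes probabilistic programs more than forward simulators.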

The field of Deep PPLs is steadily becoming an important foundational block of the machine learning ecosystem. Pyro, Edward and Infer.Net are just three recent examples of Deep PPLs, but not the only relevant ones. The intersection of deep learning frameworks and PPLs offers an incredibly large footprint for innovation, and new use cases are likely to push the boundaries of Deep PPLs in the near future.


Source: https://medium.com/swlh/a-gentle-introduction-to-probabilistic-programming-languages-bf1e19042ab6
