

Introduction to the ReLU/softplus Activation Functions and a C++ Implementation


The softplus function: ζ(x) = ln(1 + exp(x)).

The softplus function can be used to produce the β and σ parameters of a normal distribution, because its range is (0, ∞). It also appears frequently when manipulating expressions that involve the sigmoid function. The name comes from the fact that softplus is a smoothed (or "softened") version of another function, x⁺ = max(0, x); in other words, softplus is an analytic, smooth approximation of ReLU.

The softplus function was designed as a smooth version of the positive part function x⁺ = max{0, x}. Its counterpart is the negative part function x⁻ = max{0, −x}. To obtain a smooth function analogous to the negative part, we can use ζ(−x). Just as x can be recovered from its positive and negative parts through the identity x⁺ − x⁻ = x, ζ(x) and ζ(−x) can be combined in the same way: ζ(x) − ζ(−x) = x.
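This identity is easy to check numerically. The following standalone sketch (not part of the repository code below; it assumes only the C++ standard library) compares ζ(x) − ζ(−x) with x for a few sample values:

#include <cmath>
#include <cstdio>

// softplus(x) = ln(1 + e^x)
static double softplus(double x) { return std::log(1.0 + std::exp(x)); }

int main()
{
    const double xs[] = { -5.0, -1.23, 0.0, 0.234, 4.14 };
    for (double x : xs) {
        // softplus(x) - softplus(-x) should reproduce x up to floating-point error
        std::fprintf(stderr, "x = %+.4f, softplus(x) - softplus(-x) = %+.4f\n",
            x, softplus(x) - softplus(-x));
    }
    return 0;
}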

Rectifier: In the context of artificial neural networks, the rectifier is an activation function defined as:

f(x)=max(0,x)

where x is the input to a neuron. This activation function was first introduced to a dynamical network by Hahnloser et al. in a 2000 paper in Nature. It has been used in convolutional networks more effectively than the widely used logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. The rectifier is, as of 2015, the most popular activation function for deep neural networks.

A unit employing the rectifier is also called a rectified linear unit (ReLU).

A smooth approximation to the rectifier is the analytic function f(x) = ln(1 + e^x), which is called the softplus function. The derivative of softplus is f'(x) = e^x / (e^x + 1) = 1 / (1 + e^(-x)), i.e. the logistic function.
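The relationship between the softplus derivative and the logistic function can also be verified numerically. The sketch below is a standalone illustration (standard library only, not part of the repository code) that compares a central-difference approximation of the softplus derivative with the logistic function:

#include <cmath>
#include <cstdio>

static double softplus(double x) { return std::log(1.0 + std::exp(x)); }
static double logistic(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main()
{
    const double h = 1e-6; // step size for the central-difference approximation
    for (double x = -4.0; x <= 4.0; x += 2.0) {
        double numeric = (softplus(x + h) - softplus(x - h)) / (2.0 * h);
        std::fprintf(stderr, "x = %+.1f  numeric d/dx softplus = %.6f  logistic = %.6f\n",
            x, numeric, logistic(x));
    }
    return 0;
}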

Rectified linear units (ReLU) find applications in computer vision and speech recognition using deep neural nets.

Noisy ReLUs: Rectified linear units can be extended to include Gaussian noise, making them noisy ReLUs: f(x) = max(0, x + Y), with Y ~ N(0, σ(x)). Noisy ReLUs have been used with some success in restricted Boltzmann machines for computer vision tasks.
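As an illustration only, a noisy ReLU could be sketched with the standard <random> facilities as below; note that σ is simplified here to a fixed constant rather than a function of x, which is an assumption made for brevity:

#include <algorithm>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 gen(std::random_device{}());
    const double sigma = 0.1; // assumption: fixed standard deviation instead of sigma(x)
    std::normal_distribution<double> noise(0.0, sigma);

    const double xs[] = { -1.23, 0.234, 1.23, 4.14 };
    for (double x : xs) {
        double y = std::max(0.0, x + noise(gen)); // rectify the noisy pre-activation
        std::fprintf(stderr, "x = %+.3f  noisy ReLU = %.3f\n", x, y);
    }
    return 0;
}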

Leaky ReLUs: allow a small, non-zero gradient when the unit is not active: f(x) = x if x > 0, and f(x) = 0.01x otherwise.

Parametric ReLUs take this idea further by making the coefficient of leakage into a parameter that is learned along with the other neural network parameters: f(x) = x if x > 0, and f(x) = ax otherwise.

Note that for a≤1, this is equivalent to: f(x)=max(x, ax), and thus has a relation to "maxout" networks.
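The equivalence with max(x, ax) for a ≤ 1 can be checked directly; the standalone sketch below uses a = 0.01 (the leaky ReLU coefficient) and compares the piecewise definition with the max form:

#include <algorithm>
#include <cstdio>

int main()
{
    const double a = 0.01; // leakage coefficient (learned in the parametric variant)
    const double xs[] = { -3.23, -0.78, 0.0, 1.23, 5.21 };
    for (double x : xs) {
        double piecewise = x > 0.0 ? x : a * x;
        double maxform = std::max(x, a * x);
        std::fprintf(stderr, "x = %+.2f  piecewise = %+.4f  max(x, a*x) = %+.4f\n",
            x, piecewise, maxform);
    }
    return 0;
}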

ELUs: Exponential linear units try to make the mean activations closer to zero, which speeds up learning. It has been shown that ELUs can obtain higher classification accuracy than ReLUs: f(x) = x if x ≥ 0, and f(x) = a(e^x − 1) otherwise.

Here a is a hyper-parameter to be tuned, subject to the constraint a ≥ 0.

The above content is excerpted from the Chinese edition of Deep Learning and from Wikipedia.

The following is the C++ test code:

#include "funset.hpp"
#include <math.h>
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "common.hpp"// ========================= Activation Function: ELUs ========================
template<typename _Tp>
int activation_function_ELUs(const _Tp* src, _Tp* dst, int length, _Tp a = 1.)
{if (a < 0) {fprintf(stderr, "a is a hyper-parameter to be tuned and a>=0 is a constraint\n");return -1;}for (int i = 0; i < length; ++i) {dst[i] = src[i] >= (_Tp)0. ? src[i] : (a * (exp(src[i]) - (_Tp)1.));}return 0;
}// ========================= Activation Function: Leaky_ReLUs =================
template<typename _Tp>
int activation_function_Leaky_ReLUs(const _Tp* src, _Tp* dst, int length)
{for (int i = 0; i < length; ++i) {dst[i] = src[i] > (_Tp)0. ? src[i] : (_Tp)0.01 * src[i];}return 0;
}// ========================= Activation Function: ReLU =======================
template<typename _Tp>
int activation_function_ReLU(const _Tp* src, _Tp* dst, int length)
{for (int i = 0; i < length; ++i) {dst[i] = std::max((_Tp)0., src[i]);}return 0;
}// ========================= Activation Function: softplus ===================
template<typename _Tp>
int activation_function_softplus(const _Tp* src, _Tp* dst, int length)
{for (int i = 0; i < length; ++i) {dst[i] = log((_Tp)1. + exp(src[i]));}return 0;
}int test_activation_function()
{std::vector<double> src{ 1.23f, 4.14f, -3.23f, -1.23f, 5.21f, 0.234f, -0.78f, 6.23f };int length = src.size();std::vector<double> dst(length);fprintf(stderr, "source vector: \n");fbc::print_matrix(src);fprintf(stderr, "calculate activation function:\n");fprintf(stderr, "type: sigmoid result: \n");fbc::activation_function_sigmoid(src.data(), dst.data(), length);fbc::print_matrix(dst);fprintf(stderr, "type: sigmoid fast result: \n");fbc::activation_function_sigmoid_fast(src.data(), dst.data(), length);fbc::print_matrix(dst);fprintf(stderr, "type: softplus result: \n");fbc::activation_function_softplus(src.data(), dst.data(), length);fbc::print_matrix(dst);fprintf(stderr, "type: ReLU result: \n");fbc::activation_function_ReLU(src.data(), dst.data(), length);fbc::print_matrix(dst);fprintf(stderr, "type: Leaky ReLUs result: \n");fbc::activation_function_Leaky_ReLUs(src.data(), dst.data(), length);fbc::print_matrix(dst);fprintf(stderr, "type: Leaky ELUs result: \n");fbc::activation_function_ELUs(src.data(), dst.data(), length);fbc::print_matrix(dst);return 0;
}

GitHub: https://github.com/fengbingchun/NN_Test
