

torch.nn.Linear Explained


While studying the Transformer I kept running into nn.Linear(), so this article takes a detailed look at it.
Reference: https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html

Overview

    • 1. How nn.Linear works
    • 2. How to use nn.Linear
    • 3. A re-implementation of nn.Linear's source
    • 4. The official nn.Linear source

1. How nn.Linear works:

As the name suggests, nn.Linear performs a linear transformation, the tensor counterpart of the elementary linear function y = kx + b.
In deep learning, however, the variables are multi-dimensional tensors, the multiplication is a matrix product and the addition is tensor addition, so the computation nn.Linear actually performs is:

output = input @ weight.T + bias

@: the matrix-multiplication operator in Python

input: the input Tensor; it may have any number of dimensions

weight: the learnable weight, shape = (out_features, in_features)

bias: the learnable bias, shape = (out_features,)

in_features: the first argument to nn.Linear's constructor, i.e. the size of the input Tensor's last dimension

out_features: the second argument to nn.Linear's constructor, i.e. the size of the output Tensor's last dimension

output: the output Tensor; all dimensions except the last are the same as the input, and the last dimension becomes out_features (a verification sketch follows below)
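As a quick sanity check of the formula above, here is a minimal sketch (not part of the original article; the names m, x and manual are illustrative) comparing nn.Linear's output with the explicit matrix product:

import torch
import torch.nn as nn

m = nn.Linear(3, 5)                   # weight: (5, 3), bias: (5,)
x = torch.randn(4, 3)                 # a batch of 4 samples with 3 features each
manual = x @ m.weight.T + m.bias      # the explicit computation described above
print(torch.allclose(m(x), manual))   # True (up to floating-point tolerance)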

2. How to use nn.Linear:

Typical import: import torch.nn as nn

Constructing an nn.Linear layer:

nn.Linear(in_features, out_features, bias)

in_features: int, the size of the last dimension of the Tensor passed to forward

out_features: int, the size of the last dimension of the Tensor returned by forward

bias: bool, whether the linear transformation adds a learnable bias

Calling the layer (i.e. running its forward function):

output = m(input)   # m is an nn.Linear instance constructed as above

input: the input Tensor; it may have any number of dimensions
output: the output Tensor; all dimensions except the last match the input

Examples:

2-D Tensor:

m = nn.Linear(20, 40)
input = torch.randn(128, 20)
output = m(input)
print(output.size())  # torch.Size([128, 40])

4-D Tensor:

m = nn.Linear(128, 64)
input = torch.randn(512, 3, 128, 128)
output = m(input)
print(output.size())  # torch.Size([512, 3, 128, 64])
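To make the parameter shapes concrete, the following small sketch (added here for illustration, not from the original article) inspects the layer from the 4-D example; note that only the last dimension (128 -> 64) is transformed:

import torch
import torch.nn as nn

m = nn.Linear(128, 64)
print(m.weight.shape)   # torch.Size([64, 128]) -> (out_features, in_features)
print(m.bias.shape)     # torch.Size([64])      -> (out_features,)

m_nobias = nn.Linear(128, 64, bias=False)
print(m_nobias.bias)    # None -- no additive bias is learned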

3. A re-implementation of nn.Linear's source:

import math
import torch
import torch.nn as nn
from torch import Tensor
from torch.nn.parameter import Parameter, UninitializedParameter
from torch.nn import functional as F
from torch.nn import init
# from .lazy import LazyModuleMixin


class myLinear(nn.Module):
    r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(*, H_{in})` where :math:`*` means any number of
          dimensions including none and :math:`H_{in} = \text{in\_features}`.
        - Output: :math:`(*, H_{out})` where all but the last dimension
          are the same shape as the input and :math:`H_{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
                If :attr:`bias` is ``True``, the values are initialized from
                :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
                :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    __constants__ = ['in_features', 'out_features']
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(myLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        # weight is stored as (out_features, in_features); forward applies its transpose
        self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        # Setting a=sqrt(5) in kaiming_uniform is the same as initializing with
        # uniform(-1/sqrt(in_features), 1/sqrt(in_features)). For details, see
        # https://github.com/pytorch/pytorch/issues/57109
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        # Print the weight shape to make the (out_features, in_features) layout visible.
        print("self.weight.shape =", self.weight.shape)
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )


# m = myLinear(20, 40)
# input = torch.randn(128, 40, 20)
# output = m(input)
# print(output.size())

m = myLinear(128, 64)
input = torch.randn(512, 3, 128, 128)
output = m(input)
print(output.size())  # torch.Size([512, 3, 128, 64])
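As a follow-up, here is a small sketch (added for illustration, assuming the myLinear class above is defined in the same script) that checks the re-implementation against the built-in layer by copying its parameters; both end up calling F.linear:

import torch
import torch.nn as nn

ref = nn.Linear(128, 64)
mine = myLinear(128, 64)
mine.load_state_dict(ref.state_dict())   # copy weight and bias into the re-implementation

x = torch.randn(512, 3, 128, 128)
print(torch.allclose(mine(x), ref(x)))   # True -- the two modules compute the same result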

4. The official nn.Linear source:

import math

import torch
from torch import Tensor
from torch.nn.parameter import Parameter, UninitializedParameter
from .. import functional as F
from .. import init
from .module import Module
from .lazy import LazyModuleMixin


class Identity(Module):
    r"""A placeholder identity operator that is argument-insensitive.

    Args:
        args: any argument (unused)
        kwargs: any keyword argument (unused)

    Shape:
        - Input: :math:`(*)`, where :math:`*` means any number of dimensions.
        - Output: :math:`(*)`, same shape as the input.

    Examples::

        >>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 20])
    """
    def __init__(self, *args, **kwargs):
        super(Identity, self).__init__()

    def forward(self, input: Tensor) -> Tensor:
        return input


class Linear(Module):
    r"""Applies a linear transformation to the incoming data: :math:`y = xA^T + b`

    This module supports :ref:`TensorFloat32<tf32_on_ampere>`.

    Args:
        in_features: size of each input sample
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input: :math:`(*, H_{in})` where :math:`*` means any number of
          dimensions including none and :math:`H_{in} = \text{in\_features}`.
        - Output: :math:`(*, H_{out})` where all but the last dimension
          are the same shape as the input and :math:`H_{out} = \text{out\_features}`.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
                If :attr:`bias` is ``True``, the values are initialized from
                :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
                :math:`k = \frac{1}{\text{in\_features}}`

    Examples::

        >>> m = nn.Linear(20, 30)
        >>> input = torch.randn(128, 20)
        >>> output = m(input)
        >>> print(output.size())
        torch.Size([128, 30])
    """
    __constants__ = ['in_features', 'out_features']
    in_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(Linear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        # Setting a=sqrt(5) in kaiming_uniform is the same as initializing with
        # uniform(-1/sqrt(in_features), 1/sqrt(in_features)). For details, see
        # https://github.com/pytorch/pytorch/issues/57109
        init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        if self.bias is not None:
            fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
            bound = 1 / math.sqrt(fan_in) if fan_in > 0 else 0
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input: Tensor) -> Tensor:
        return F.linear(input, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in_features={}, out_features={}, bias={}'.format(
            self.in_features, self.out_features, self.bias is not None
        )


# This class exists solely to avoid triggering an obscure error when scripting
# an improperly quantized attention layer.
# See this issue for details: https://github.com/pytorch/pytorch/issues/58969
# TODO: fail fast on quantization API usage error, then remove this class
# and replace uses of it with plain Linear
class NonDynamicallyQuantizableLinear(Linear):
    def __init__(self, in_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        super().__init__(in_features, out_features, bias=bias,
                         device=device, dtype=dtype)


class Bilinear(Module):
    r"""Applies a bilinear transformation to the incoming data:
    :math:`y = x_1^T A x_2 + b`

    Args:
        in1_features: size of each first input sample
        in2_features: size of each second input sample
        out_features: size of each output sample
        bias: If set to False, the layer will not learn an additive bias.
            Default: ``True``

    Shape:
        - Input1: :math:`(*, H_{in1})` where :math:`H_{in1}=\text{in1\_features}` and
          :math:`*` means any number of additional dimensions including none. All but the last
          dimension of the inputs should be the same.
        - Input2: :math:`(*, H_{in2})` where :math:`H_{in2}=\text{in2\_features}`.
        - Output: :math:`(*, H_{out})` where :math:`H_{out}=\text{out\_features}`
          and all but the last dimension are the same shape as the input.

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in1\_features}, \text{in2\_features})`.
            The values are initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in1\_features}}`
        bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
                If :attr:`bias` is ``True``, the values are initialized from
                :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
                :math:`k = \frac{1}{\text{in1\_features}}`

    Examples::

        >>> m = nn.Bilinear(20, 30, 40)
        >>> input1 = torch.randn(128, 20)
        >>> input2 = torch.randn(128, 30)
        >>> output = m(input1, input2)
        >>> print(output.size())
        torch.Size([128, 40])
    """
    __constants__ = ['in1_features', 'in2_features', 'out_features']
    in1_features: int
    in2_features: int
    out_features: int
    weight: Tensor

    def __init__(self, in1_features: int, in2_features: int, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        super(Bilinear, self).__init__()
        self.in1_features = in1_features
        self.in2_features = in2_features
        self.out_features = out_features
        self.weight = Parameter(torch.empty((out_features, in1_features, in2_features), **factory_kwargs))

        if bias:
            self.bias = Parameter(torch.empty(out_features, **factory_kwargs))
        else:
            self.register_parameter('bias', None)
        self.reset_parameters()

    def reset_parameters(self) -> None:
        bound = 1 / math.sqrt(self.weight.size(1))
        init.uniform_(self.weight, -bound, bound)
        if self.bias is not None:
            init.uniform_(self.bias, -bound, bound)

    def forward(self, input1: Tensor, input2: Tensor) -> Tensor:
        return F.bilinear(input1, input2, self.weight, self.bias)

    def extra_repr(self) -> str:
        return 'in1_features={}, in2_features={}, out_features={}, bias={}'.format(
            self.in1_features, self.in2_features, self.out_features, self.bias is not None
        )


class LazyLinear(LazyModuleMixin, Linear):
    r"""A :class:`torch.nn.Linear` module where `in_features` is inferred.

    In this module, the `weight` and `bias` are of :class:`torch.nn.UninitializedParameter`
    class. They will be initialized after the first call to ``forward`` is done and the
    module will become a regular :class:`torch.nn.Linear` module. The ``in_features`` argument
    of the :class:`Linear` is inferred from the ``input.shape[-1]``.

    Check the :class:`torch.nn.modules.lazy.LazyModuleMixin` for further documentation
    on lazy modules and their limitations.

    Args:
        out_features: size of each output sample
        bias: If set to ``False``, the layer will not learn an additive bias.
            Default: ``True``

    Attributes:
        weight: the learnable weights of the module of shape
            :math:`(\text{out\_features}, \text{in\_features})`. The values are
            initialized from :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})`, where
            :math:`k = \frac{1}{\text{in\_features}}`
        bias:   the learnable bias of the module of shape :math:`(\text{out\_features})`.
                If :attr:`bias` is ``True``, the values are initialized from
                :math:`\mathcal{U}(-\sqrt{k}, \sqrt{k})` where
                :math:`k = \frac{1}{\text{in\_features}}`
    """

    cls_to_become = Linear  # type: ignore[assignment]
    weight: UninitializedParameter
    bias: UninitializedParameter  # type: ignore[assignment]

    def __init__(self, out_features: int, bias: bool = True,
                 device=None, dtype=None) -> None:
        factory_kwargs = {'device': device, 'dtype': dtype}
        # bias is hardcoded to False to avoid creating tensor
        # that will soon be overwritten.
        super().__init__(0, 0, False)
        self.weight = UninitializedParameter(**factory_kwargs)
        self.out_features = out_features
        if bias:
            self.bias = UninitializedParameter(**factory_kwargs)

    def reset_parameters(self) -> None:
        if not self.has_uninitialized_params() and self.in_features != 0:
            super().reset_parameters()

    def initialize_parameters(self, input) -> None:  # type: ignore[override]
        if self.has_uninitialized_params():
            with torch.no_grad():
                self.in_features = input.shape[-1]
                self.weight.materialize((self.out_features, self.in_features))
                if self.bias is not None:
                    self.bias.materialize((self.out_features,))
                self.reset_parameters()

# TODO: PartialLinear - maybe in sparse?
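One piece of the source above worth trying out is LazyLinear, which infers in_features from the first batch it sees. A short usage sketch (not from the original article) follows:

import torch
import torch.nn as nn

lazy = nn.LazyLinear(out_features=64)   # in_features is not specified here
print(lazy.weight)                      # an uninitialized parameter before the first call

x = torch.randn(512, 3, 128, 128)
y = lazy(x)                             # in_features is inferred from x.shape[-1] == 128
print(lazy.in_features)                 # 128
print(y.shape)                          # torch.Size([512, 3, 128, 64])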

Summary

That concludes this walkthrough of torch.nn.Linear; hopefully it helps you work through the problems you ran into.
