MinkowskiPooling (Part 1)
When the kernel size equals the stride size (e.g., kernel_size = [2, 1], stride = [2, 1]), the engine generates the input-output mapping for the pooling function faster.
If you use a U-network architecture, use the transposed version of the same function for up-sampling: e.g., pool = MinkowskiSumPooling(kernel_size=2, stride=2, dimension=D), then use unpool = MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=D).
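A minimal sketch of such a pool/unpool pair (assuming MinkowskiEngine is importable as ME; the coordinates and features are illustrative, and the SparseTensor argument names differ slightly across ME versions):

import torch
import MinkowskiEngine as ME

D = 3  # spatial dimension
pool = ME.MinkowskiSumPooling(kernel_size=2, stride=2, dimension=D)
unpool = ME.MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=D)

# Batched coordinates (first column is the batch index) and per-point features.
coords = torch.IntTensor([[0, 0, 0, 0], [0, 1, 0, 0], [0, 2, 3, 1]])
feats = torch.rand(3, 8)
x = ME.SparseTensor(feats, coords)

y = pool(x)    # coordinates are down-sampled; tensor_stride doubles
z = unpool(y)  # up-sample back to the finer resolution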
kernel_size (int, optional): the size of the kernel in the output tensor. If not provided, region_type should be RegionType.CUSTOM and region_offset should be a 2D matrix of size N × D that lists all N offsets in D dimensions.
stride (int, or list, optional): stride size of the convolution layer. If non-identity is used, the output coordinates will be at least stride × tensor_stride away. When a list is given, the length must be D; each element will be used as the stride size for the corresponding axis.
dilation (int, or list, optional): dilation size for the convolution kernel. When a list is given, the length must be D and each element is an axis-specific dilation. All elements must be > 0.
kernel_generator (MinkowskiEngine.KernelGenerator, optional): defines a custom kernel shape.
dimension (int): the spatial dimension of the space where all the inputs and the network are defined. For example, images are in a 2D space; meshes and 3D shapes are in a 3D space.
Custom kernel shapes are not supported when kernel_size == stride.
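When kernel_size != stride, however, a custom shape can be passed through kernel_generator. The sketch below assumes KernelGenerator and RegionType.HYPER_CROSS exist with these keyword names (they may vary across MinkowskiEngine versions):

import MinkowskiEngine as ME

D = 3
# A hyper-cross region keeps only the offsets along each coordinate axis
# instead of the default dense hyper-cube of kernel_size**D offsets.
kgen = ME.KernelGenerator(
    kernel_size=3,
    stride=1,
    dilation=1,
    region_type=ME.RegionType.HYPER_CROSS,
    dimension=D,
)
# kernel_size != stride here, so the custom kernel shape is allowed.
pool = ME.MinkowskiAvgPooling(kernel_size=3, stride=1, kernel_generator=kgen, dimension=D)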
cpu() → T
Moves all model parameters and buffers to the CPU.
Returns:
Module: self
cuda(device: Optional[Union[int, torch.device]] = None) → T
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized.
Arguments:
device (int, optional): if specified, all parameters will be
copied to that device
Returns:
Module: self
double() → T
Casts all floating point parameters and buffers to double datatype.
Returns:
Module: self
float() → T
Casts all floating point parameters and buffers to float datatype.
Returns:
Module: self
forward(input: SparseTensor.SparseTensor, coords: Union[torch.IntTensor, MinkowskiCoords.CoordsKey, SparseTensor.SparseTensor] = None)
input (MinkowskiEngine.SparseTensor): Input sparse tensor to apply a convolution on.
coords ((torch.IntTensor, MinkowskiEngine.CoordsKey, MinkowskiEngine.SparseTensor), optional): If provided, generate results on the provided coordinates. None by default.
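The optional coords argument lets you generate the output on a chosen coordinate set. A short sketch (assuming ME is MinkowskiEngine; any pooling layer from this page works the same way):

import torch
import MinkowskiEngine as ME

D = 2
pool = ME.MinkowskiAvgPooling(kernel_size=2, stride=2, dimension=D)

coords = torch.IntTensor([[0, 0, 0], [0, 0, 1], [0, 2, 3]])  # batch index first
feats = torch.rand(3, 4)
x = ME.SparseTensor(feats, coords)

y = pool(x)      # output coordinates are chosen by the engine (stride 2)
y2 = pool(x, y)  # generate the result on the coordinates of another sparse tensor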
to(*args, **kwargs)
Moves and/or casts the parameters and buffers.
This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtype unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU tensors with pinned memory to CUDA devices.
This method modifies the module in-place.
Args:
device (torch.device): the desired device of the parameters
and buffers in this module
dtype (torch.dtype): the desired floating point type of
the floating point parameters and buffers in this module
tensor (torch.Tensor): Tensor whose dtype and device are the desired
dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format): the desired memory
format for 4D parameters and buffers in this module (keyword only argument)
Returns:
Module: self
Example:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
type(dst_type: Union[torch.dtype, str]) → T
Casts all parameters and buffers to dst_type.
Arguments:
dst_type (type or string): the desired type
Returns:
Module: self
MinkowskiAvgPooling
The average layer first computes the cardinality of the input features, the number of input features for each output, and then divides the sum of the input features by the cardinality. For dense tensors, the cardinality is a constant: the volume of the kernel. However, for sparse tensors, the cardinality varies with the number of input features per output. Thus, average pooling for sparse tensors is not equivalent to the conventional average pooling layer for dense tensors. Please refer to MinkowskiSumPooling for the equivalent layer.
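To make the varying cardinality concrete, here is a tiny plain-PyTorch sketch (illustrative only, not the engine's implementation) in which one output receives two inputs and another receives only one:

import torch

# Three nonzero inputs map to two outputs: output 0 receives two inputs,
# output 1 receives one.
feats = torch.tensor([[2.0], [4.0], [6.0]])
out_index = torch.tensor([0, 0, 1])  # which output each input feeds

sums = torch.zeros(2, 1).index_add_(0, out_index, feats)
cardinality = torch.zeros(2, 1).index_add_(0, out_index, torch.ones_like(feats))

sparse_avg = sums / cardinality  # [[3.], [6.]]: divide by 2, then by 1
dense_avg = sums / 4             # a dense 2x2 kernel would always divide by 4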
When the kernel size equals the stride size (e.g., kernel_size = [2, 1], stride = [2, 1]), the engine generates the input-output mapping for the pooling function faster.
If you use a U-network architecture, use the transposed version of the same function for up-sampling: e.g., pool = MinkowskiSumPooling(kernel_size=2, stride=2, dimension=D), then use unpool = MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=D).
__init__(kernel_size=-1, stride=1, dilation=1, kernel_generator=None, dimension=None)
A high-dimensional sparse average pooling layer.
kernel_size (int, optional): the size of the kernel in the output tensor. If not provided, region_type should be RegionType.CUSTOM and region_offset should be a 2D matrix of size N × D that lists all N offsets in D dimensions.
stride (int, or list, optional): stride size of the convolution layer. If non-identity is used, the output coordinates will be at least stride × tensor_stride away. When a list is given, the length must be D; each element will be used as the stride size for the corresponding axis.
dilation (int, or list, optional): dilation size for the convolution kernel. When a list is given, the length must be D and each element is an axis specific dilation. All elements must be > 0.
kernel_generator (MinkowskiEngine.KernelGenerator, optional): define custom kernel shape.
dimension (int): the spatial dimension of the space where all the inputs and the network are defined. For example, images are in a 2D space, meshes and 3D shapes are in a 3D space.
Custom kernel shapes are not supported when kernel_size == stride.
cpu() → T
Moves all model parameters and buffers to the CPU.
Returns:
Module: self
cuda(device: Optional[Union[int, torch.device]] = None) → T
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized.
Arguments:
device (int, optional): if specified, all parameters will be
copied to that device
Returns:
Module: self
double() → T
Casts all floating point parameters and buffers to double datatype.
Returns:
Module: self
float() → T
Casts all floating point parameters and buffers to float datatype.
Returns:
Module: self
forward(input: SparseTensor.SparseTensor, coords: Union[torch.IntTensor, MinkowskiCoords.CoordsKey, SparseTensor.SparseTensor] = None)
input (MinkowskiEngine.SparseTensor): Input sparse tensor to apply a convolution on.
coords ((torch.IntTensor, MinkowskiEngine.CoordsKey, MinkowskiEngine.SparseTensor), optional): If provided, generate results on the provided coordinates. None by default.
to(*args, **kwargs)
Moves and/or casts the parameters and buffers.
This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtype unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU tensors with pinned memory to CUDA devices.
This method modifies the module in-place.
Args:
device (torch.device): the desired device of the parameters
and buffers in this module
dtype (torch.dtype): the desired floating point type of
the floating point parameters and buffers in this module
tensor (torch.Tensor): Tensor whose dtype and device are the desired
dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format): the desired memory
format for 4D parameters and buffers in this module (keyword only argument)
Returns:
Module: self
Example:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
type(dst_type: Union[torch.dtype, str]) → T
Casts all parameters and buffers to dst_type.
Arguments:
dst_type (type or string): the desired type
Returns:
Module: self
MinkowskiSumPooling
class MinkowskiEngine.MinkowskiSumPooling(kernel_size, stride=1, dilation=1, kernel_generator=None, dimension=None)
Sum all input features within a kernel.
The average layer first computes the cardinality of the input features, the number of input features for each output, and divides the sum of the input features by the cardinality. For dense tensors, the cardinality is a constant: the volume of the kernel. However, for sparse tensors, the cardinality varies with the number of input features per output. Thus, averaging the input features with the cardinality may not be equivalent to the conventional average pooling for dense tensors. This layer provides an alternative that does not divide the sum by the cardinality. The relationship is illustrated in the sketch below.
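A hedged sketch of the relationship between the two layers (assuming the constructors documented below; output row order is not guaranteed):

import torch
import MinkowskiEngine as ME

D = 2
avg = ME.MinkowskiAvgPooling(kernel_size=2, stride=2, dimension=D)
summed = ME.MinkowskiSumPooling(kernel_size=2, stride=2, dimension=D)

# Two points fall in one 2x2 cell, a third point in another cell.
coords = torch.IntTensor([[0, 0, 0], [0, 0, 1], [0, 2, 2]])
feats = torch.ones(3, 1)
x = ME.SparseTensor(feats, coords)

# Sum pooling keeps the raw per-output sums; average pooling divides each
# sum by its own cardinality (2 for the first cell, 1 for the second).
print(summed(x).F)  # expected values: [[2.], [1.]]
print(avg(x).F)     # expected values: [[1.], [1.]]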
When the kernel size equals the stride size (e.g., kernel_size = [2, 1], stride = [2, 1]), the engine generates the input-output mapping for the pooling function faster.
If you use a U-network architecture, use the transposed version of the same function for up-sampling: e.g., pool = MinkowskiSumPooling(kernel_size=2, stride=2, dimension=D), then use unpool = MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=D).
__init__(kernel_size, stride=1, dilation=1, kernel_generator=None, dimension=None)
A high-dimensional sum pooling layer.
Args:
kernel_size (int, optional): the size of the kernel in the output tensor. If not provided, region_type should be RegionType.CUSTOM and region_offset should be a 2D matrix of size N × D that lists all N offsets in D dimensions.
stride (int, or list, optional): stride size of the convolution layer. If non-identity is used, the output coordinates will be at least stride × tensor_stride away. When a list is given, the length must be D; each element will be used as the stride size for the corresponding axis.
dilation (int, or list, optional): dilation size for the convolution kernel. When a list is given, the length must be D and each element is an axis specific dilation. All elements must be > 0.
kernel_generator (MinkowskiEngine.KernelGenerator, optional): define custom kernel shape.
dimension (int): the spatial dimension of the space where all the inputs and the network are defined. For example, images are in a 2D space, meshes and 3D shapes are in a 3D space.
Custom kernel shapes are not supported when kernel_size == stride.
cpu() → T
Moves all model parameters and buffers to the CPU.
Returns:
Module: self
cuda(device: Optional[Union[int, torch.device]] = None) → T
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on the GPU while being optimized.
Arguments:
device (int, optional): if specified, all parameters will be
copied to that device
Returns:
Module: self
double() → T
Casts all floating point parameters and buffers to double datatype.
Returns:
Module: self
float() → T
Casts all floating point parameters and buffers to float datatype.
Returns:
Module: self
forward(input: SparseTensor.SparseTensor, coords: Union[torch.IntTensor, MinkowskiCoords.CoordsKey, SparseTensor.SparseTensor] = None)
input (MinkowskiEngine.SparseTensor): Input sparse tensor to apply a convolution on.
coords ((torch.IntTensor, MinkowskiEngine.CoordsKey, MinkowskiEngine.SparseTensor), optional): If provided, generate results on the provided coordinates. None by default.
to(*args, **kwargs)
Moves and/or casts the parameters and buffers.
This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtype unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU tensors with pinned memory to CUDA devices.
See the examples below.
Args:
device (torch.device): the desired device of the parameters
and buffers in this module
dtype (torch.dtype): the desired floating point type of
the floating point parameters and buffers in this module
tensor (torch.Tensor): Tensor whose dtype and device are the desired
dtype and device for all parameters and buffers in this module
memory_format (torch.memory_format): the desired memory
format for 4D parameters and buffers in this module (keyword only argument)
Returns:
Module: self
Example:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
type(dst_type: Union[torch.dtype, str]) → T
Casts all parameters and buffers to dst_type.
Arguments:
dst_type (type or string): the desired type
Returns:
Module: self