
deeplearning model analysis

FLOPs
paddleslim.analysis.flops(program, only_conv=True, detail=False)
Returns the number of floating-point operations (FLOPs) of the given network.
Arguments:
- program (paddle.fluid.Program) - the network to analyze. For more about Program, see the "Program" concept guide.
- detail (bool) - whether to also return the FLOPs of each convolution layer. Defaults to False.
- only_conv (bool) - if True, only the FLOPs of convolution and fully connected layers are counted, i.e. the number of floating-point multiply-add operations. If False, the FLOPs of operations other than convolution and fully connected layers are counted as well.
Returns:
- flops (float) - FLOPs of the whole network.
- params2flops (dict) - FLOPs of each convolution layer, where the key is the convolution layer's parameter name and the value is its FLOPs.
Example:
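For intuition, the FLOPs of a single convolution layer can be counted by hand: every output element needs one multiply-add per kernel element per input channel in its group. A minimal sketch in plain Python (the helper name is illustrative, not part of PaddleSlim's API, and it counts multiply-adds rather than separate multiplies and adds):

```python
def conv2d_flops(in_c, out_c, out_h, out_w, k_h, k_w, groups=1):
    """Multiply-add count of one conv2d layer (bias ignored)."""
    # One multiply-add per kernel element per in-group channel,
    # for every element of the output feature map.
    kernel_ops = k_h * k_w * (in_c // groups)
    return out_h * out_w * out_c * kernel_ops

# First conv of the example below: 3->8 channels, 3x3 kernel,
# stride 1, "same" padding on a 16x16 input.
print(conv2d_flops(3, 8, 16, 16, 3, 3))  # 55296
```

Whether multiply-adds are counted as one operation or two varies between tools, so absolute numbers from different FLOPs counters are not directly comparable.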
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.analysis import flops

def conv_bn_layer(input,
                  num_filters,
                  filter_size,
                  name,
                  stride=1,
                  groups=1,
                  act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + "_weights"),
        bias_attr=False,
        name=name + "_out")
    bn_name = name + "_bn"
    return fluid.layers.batch_norm(
        input=conv,
        act=act,
        name=bn_name + "_output",
        param_attr=ParamAttr(name=bn_name + "_scale"),
        bias_attr=ParamAttr(bn_name + "_offset"),
        moving_mean_name=bn_name + "_mean",
        moving_variance_name=bn_name + "_variance")

main_program = fluid.Program()
startup_program = fluid.Program()

#   X       X               O       X               O
# conv1-->conv2-->sum1-->conv3-->conv4-->sum2-->conv5-->conv6
#   |_______________^       |_______________^
#
# X: prune output channels
# O: prune input channels

with fluid.program_guard(main_program, startup_program):
    input = fluid.data(name="image", shape=[None, 3, 16, 16])
    conv1 = conv_bn_layer(input, 8, 3, "conv1")
    conv2 = conv_bn_layer(conv1, 8, 3, "conv2")
    sum1 = conv1 + conv2
    conv3 = conv_bn_layer(sum1, 8, 3, "conv3")
    conv4 = conv_bn_layer(conv3, 8, 3, "conv4")
    sum2 = conv4 + sum1
    conv5 = conv_bn_layer(sum2, 8, 3, "conv5")
    conv6 = conv_bn_layer(conv5, 8, 3, "conv6")

print("FLOPs: {}".format(flops(main_program)))
model_size
paddleslim.analysis.model_size(program)
Returns the number of parameters of the given network.
Arguments:
- program (paddle.fluid.Program) - the network to analyze. For more about Program, see the "Program" concept guide.
Returns:
- model_size (int) - number of parameters of the whole network.
Example:
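The parameter count of a convolution layer can likewise be worked out by hand: the weight tensor holds one value per output channel, per in-group input channel, per kernel element, plus one bias value per output channel if a bias is used. A minimal sketch (the helper name is illustrative, not part of PaddleSlim's API):

```python
def conv2d_params(in_c, out_c, k_h, k_w, groups=1, bias=False):
    """Parameter count of one conv2d layer."""
    weights = out_c * (in_c // groups) * k_h * k_w
    return weights + (out_c if bias else 0)

# First conv of the example below: 3->8 channels, 3x3 kernel,
# bias_attr=False, so only the weight tensor counts.
print(conv2d_params(3, 8, 3, 3))  # 216
```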
import paddle.fluid as fluid
from paddle.fluid.param_attr import ParamAttr
from paddleslim.analysis import model_size

def conv_layer(input,
               num_filters,
               filter_size,
               name,
               stride=1,
               groups=1,
               act=None):
    conv = fluid.layers.conv2d(
        input=input,
        num_filters=num_filters,
        filter_size=filter_size,
        stride=stride,
        padding=(filter_size - 1) // 2,
        groups=groups,
        act=None,
        param_attr=ParamAttr(name=name + "_weights"),
        bias_attr=False,
        name=name + "_out")
    return conv

main_program = fluid.Program()
startup_program = fluid.Program()

#   X       X               O       X               O
# conv1-->conv2-->sum1-->conv3-->conv4-->sum2-->conv5-->conv6
#   |_______________^       |_______________^
#
# X: prune output channels
# O: prune input channels

with fluid.program_guard(main_program, startup_program):
    input = fluid.data(name="image", shape=[None, 3, 16, 16])
    conv1 = conv_layer(input, 8, 3, "conv1")
    conv2 = conv_layer(conv1, 8, 3, "conv2")
    sum1 = conv1 + conv2
    conv3 = conv_layer(sum1, 8, 3, "conv3")
    conv4 = conv_layer(conv3, 8, 3, "conv4")
    sum2 = conv4 + sum1
    conv5 = conv_layer(sum2, 8, 3, "conv5")
    conv6 = conv_layer(conv5, 8, 3, "conv6")

print("model_size: {}".format(model_size(main_program)))
TableLatencyEvaluator
class paddleslim.analysis.TableLatencyEvaluator(table_file, delimiter=",")
A model latency evaluator based on a hardware latency table.
Arguments:
- table_file (str) - absolute path of the latency table to use. For the latency table format, see "PaddleSlim hardware latency table format".
- delimiter (str) - the delimiter used between the operator fields in the latency table. Defaults to a comma.
Returns:
- Evaluator - an instance of the hardware latency evaluator.
latency(graph)
Returns the estimated latency of the given network.
Arguments:
- graph (Program) - the network to estimate.
Returns:
- latency - the estimated latency of the network.
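The idea behind a table-based evaluator can be illustrated with a toy sketch in plain Python: look up each operator's measured latency by a signature string and sum over the network's operators. The signature format and all numbers below are made up for illustration and do not match PaddleSlim's actual table format:

```python
# Toy latency table: operator signature -> measured latency in ms.
# Signatures and values are hypothetical, for illustration only.
latency_table = {
    "conv2d,3,8,3x3": 0.12,
    "conv2d,8,8,3x3": 0.30,
    "batch_norm,8": 0.02,
}

def estimate_latency(op_signatures, table):
    # Total latency is the sum of the per-operator table entries.
    return sum(table[sig] for sig in op_signatures)

ops = ["conv2d,3,8,3x3", "batch_norm,8", "conv2d,8,8,3x3", "batch_norm,8"]
print(round(estimate_latency(ops, latency_table), 2))  # 0.46
```

Real tables are built by benchmarking each operator configuration on the target device, so estimates are only as accurate as the table's coverage of the operators actually present in the graph.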
