
生活随笔
Writing data to LMDB from Python is very slow

Published: 2025/6/17

生活随笔 collected and edited this article, "Writing data to LMDB from Python is very slow," and shares it here as a reference.

While creating datasets for training with Caffe, I tried both HDF5 and LMDB. However, creating the LMDB is very slow, even slower than HDF5. I am trying to write ~20,000 images.

Am I doing something terribly wrong? Is there something I am not aware of?

This is my code for LMDB creation:

```python
import lmdb
import caffe

DB_KEY_FORMAT = "{:0>10d}"

db = lmdb.open(path, map_size=int(1e12))
curr_idx = 0
commit_size = 1000

for curr_commit_idx in range(0, num_data, commit_size):
    # One write transaction per batch of commit_size images.
    # (was: in_db_data.begin(...) -- an undefined name; the environment is db)
    with db.begin(write=True) as in_txn:
        for i in range(curr_commit_idx, min(curr_commit_idx + commit_size, num_data)):
            d, l = data[i], labels[i]
            im_dat = caffe.io.array_to_datum(d.astype(float), label=int(l))
            # Zero-padded keys keep lexicographic order equal to numeric order.
            # Note: under Python 3, LMDB keys must be bytes, i.e. key.encode('ascii').
            key = DB_KEY_FORMAT.format(curr_idx)
            in_txn.put(key, im_dat.SerializeToString())
            curr_idx += 1

db.close()
```

As you can see, I create one transaction per 1,000 images; I assumed that committing a separate transaction for every single image would add overhead, but batching does not seem to influence performance much.

Solution

In my experience, writes to LMDB from Python (Caffe data, ext4 hard disk, Ubuntu) took 50-100 ms each. That is why I use tmpfs (the RAM-disk functionality built into Linux), which gets these writes done in around 0.07 ms. You can build smaller databases on your ramdisk, copy them to a hard disk, and later train on all of them. I make them around 20-40 GB each, since I have 64 GB of RAM.

Here are some pieces of code to help you dynamically create, fill, and move LMDBs to storage. Feel free to edit them to fit your case. They should save you some time working out how LMDB and file manipulation work in Python.

```python
import os
import random
import shutil
import string

import lmdb

# 'fold' is the base working directory, assumed to be defined elsewhere.

def move_db():
    """Close the current LMDB, move it off the ramdisk, and reopen a fresh one."""
    global image_db
    image_db.close()
    rnd = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5))
    shutil.move(os.path.join(fold, 'ram/train_images'), '/storage/lmdb/' + rnd)
    open_db()

def open_db():
    global image_db
    image_db = lmdb.open(os.path.join(fold, 'ram/train_images'),
                         map_async=True,
                         max_dbs=0)

def write_to_lmdb(db, key, value):
    """Write (key, value) to db, growing the map on demand."""
    success = False
    while not success:
        txn = db.begin(write=True)
        try:
            txn.put(key, value)
            txn.commit()
            success = True
        except lmdb.MapFullError:
            txn.abort()
            # Double the map_size and retry the write.
            curr_limit = db.info()['map_size']
            new_limit = curr_limit * 2
            print('>>> Doubling LMDB map size to %d MB ...' % (new_limit >> 20))
            db.set_mapsize(new_limit)

...

image_datum = caffe.io.array_to_datum(transformed_image, label)
write_to_lmdb(image_db, str(itr), image_datum.SerializeToString())
```

Summary

The above is the full content of "Writing data to LMDB from Python is very slow," as collected and edited by 生活随笔. We hope it helps you solve the problem you ran into.