Python performance tuning: JSON to CSV, large files
A colleague asked me to convert the 6 large files in the "Yelp Dataset Challenge" from "flat", regular JSON to CSV (he thought they looked like interesting teaching data).
I figured I could use:
# With thanks to http://www.diveintopython3.net/files.html and https://www.reddit.com/r/MachineLearning/comments/33eglq/python_help_jsoncsv_pandas/cqkwyu8/
import os
import pandas

jsondir = 'c:\\example\\bigfiles\\'
csvdir = 'c:\\example\\bigcsvfiles\\'
if not os.path.exists(csvdir):
    os.makedirs(csvdir)
for file in os.listdir(jsondir):
    with open(jsondir + file, 'r', encoding='utf-8') as f:
        data = f.readlines()
    df = pandas.read_json('[' + ','.join(map(lambda x: x.rstrip(), data)) + ']')
    df.to_csv(csvdir + os.path.splitext(file)[0] + '.csv', index=0, quoting=1)
Unfortunately, my computer doesn't have enough memory for files this size. (Even after I got rid of the loop, it churns out a 50 MB file in under a minute, but it struggles not to freeze my computer, or simply crashes, on anything over 100 MB, and the largest file is 3.25 GB.)
Is there another simple but well-performing alternative?
Doing it in one loop would be nice, but if the file handles affect memory (there are only 6 files), I could just as well run it 6 times with individual file names.
Here is a sample of the contents of a ".json" file. Note that each file actually contains many JSON objects, one per line.
{"business_id":"xyzzy","name":"Business A","neighborhood":"","address":"XX YY ZZ","city":"Tempe","state":"AZ","postal_code":"85283","latitude":33.32823894longitude":-111.28948,"stars":3,"review_count":3,"is_open":0,"attributes":["BikeParking: True","BusinessAcceptsBitcoin: False","BusinessAcceptsCreditCards: True","BusinessParking: {'garage': False, 'street': False, 'validated': False, 'lot': True, 'valet': False}","DogsAllowed: False","RestaurantsPriceRange2: 2","WheelchairAccessible: True"],"categories":["Tobacco Shops","Nightlife","Vape Shops","Shopping"],"hours":["Monday 11:0-21:0","Tuesday 11:0-21:0","Wednesday 11:0-21:0","Thursday 11:0-21:0","Friday 11:0-22:0","Saturday 10:0-22:0","Sunday 11:0-18:0"],"type":"business"}
{"business_id":"dsfiuweio2f","name":"Some Place","neighborhood":"","address":"Strip or something","city":"Las Vegas","state":"NV","postal_code":"89106","latitude":36.189134,"longitude":-115.92094,"stars":1.5,"review_count":2,"is_open":1,"attributes":["BusinessAcceptsBitcoin: False","BusinessAcceptsCreditCards: True"],"categories":["Caterers","Grocery","Food","Event Planning & Services","Party & Event Planning","Specialty Food"],"hours":["Monday 0:0-0:0","Tuesday 0:0-0:0","Wednesday 0:0-0:0","Thursday 0:0-0:0","Friday 0:0-0:0","Saturday 0:0-0:0","Sunday 0:0-0:0"],"type":"business"}
{"business_id":"abccb","name":"La la la","neighborhood":"Blah blah","address":"Yay that","city":"Toronto","state":"ON","postal_code":"M6H 1L5","latitude":43.283984,"longitude":-79.28284,"stars":2,"review_count":6,"is_open":1,"attributes":["Alcohol: none","Ambience: {'romantic': False, 'intimate': False, 'classy': False, 'hipster': False, 'touristy': False, 'trendy': False, 'upscale': False, 'casual': False}","BikeParking: True","BusinessAcceptsCreditCards: True","BusinessParking: {'garage': False, 'street': False, 'validated': False, 'lot': False, 'valet': False}","Caters: True","GoodForKids: True","GoodForMeal: {'dessert': False, 'latenight': False, 'lunch': False, 'dinner': False, 'breakfast': False, 'brunch': False}","HasTV: True","NoiseLevel: quiet","OutdoorSeating: False","RestaurantsAttire: casual","RestaurantsDelivery: True","RestaurantsGoodForGroups: True","RestaurantsPriceRange2: 1","RestaurantsReservations: False","RestaurantsTableService: False","RestaurantsTakeOut: True","WiFi: free"],"categories":["Restaurants","Pizza","Chicken Wings","Italian"],"hours":["Monday 11:0-2:0","Tuesday 11:0-2:0","Wednesday 11:0-2:0","Thursday 11:0-3:0","Friday 11:0-3:0","Saturday 11:0-3:0","Sunday 11:0-2:0"],"type":"business"}
The nested JSON data can simply be left as the string literal that represents it; I only want the top-level keys to become the CSV file's column headers.
Instead of reading and parsing the whole file at once, you could try reading it one JSON dictionary (one CSV row) at a time, parsing that, and inserting it into the CSV. It takes a bit more hand coding, but it will perform well in a file-streaming style.
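That suggestion can be sketched with nothing but the standard library (the function and file names here are hypothetical, and it skips pandas entirely): parse one JSON object per line, keep nested values as string literals, and write each row out immediately, so only one line is ever held in memory.

```python
import csv
import json

def json_lines_to_csv(json_path, csv_path):
    """Stream a line-delimited JSON file into a CSV, one record at a time."""
    with open(json_path, 'r', encoding='utf-8') as src, \
         open(csv_path, 'w', encoding='utf-8', newline='') as dst:
        writer = None
        for line in src:
            if not line.strip():
                continue  # skip blank lines
            record = json.loads(line)
            # Keep nested lists/dicts as their JSON string representation;
            # only the top-level keys become CSV column headers.
            row = {k: (v if isinstance(v, (str, int, float, bool)) or v is None
                       else json.dumps(v))
                   for k, v in record.items()}
            if writer is None:
                # Take the header from the first record's top-level keys.
                writer = csv.DictWriter(dst, fieldnames=list(row),
                                        quoting=csv.QUOTE_ALL)
                writer.writeheader()
            writer.writerow(row)
```

This assumes every record has the same top-level keys (as the Yelp files do); otherwise `DictWriter` would raise on unexpected fields.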
The problem is that your code reads the entire file into memory and then creates a near-copy of it in memory. I suspect it creates a third copy as well, but I haven't verified that. The fix, as NEOX suggests, is to read the file line by line and process it accordingly. Here is a replacement for the for loop:
for file in os.listdir(jsondir):
    csv_file = csvdir + os.path.splitext(file)[0] + '.csv'
    with open(jsondir + file, 'r', encoding='utf-8') as f, open(csv_file, 'w', encoding='utf-8') as csv:
        header = True
        for line in f:
            df = pandas.read_json(''.join(('[', line.rstrip(), ']')))
            df.to_csv(csv, header=header, index=0, quoting=1)
            header = False
I've tested this with Python 3.5 on a Mac; it should work on Windows, but I haven't tested it there.
Notes:
I've tweaked your JSON data; the latitude/longitude on the first line appeared to be mangled.
This has only been tested with a small file; I wasn't sure where to get a 3.5 GB file.
I'm assuming this is a one-off for your friend. If this were production code, you'd want to verify that the exception handling around the "with" statement is correct; see "How to open multiple files using 'with open' in Python?" for details.
This should perform reasonably well, but again, I wasn't sure where to get files as big as yours.
Check out ijson, which makes streaming a JSON file as simple as using a Python iterator.
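For a feel of what ijson provides, here is a stdlib-only sketch (not ijson itself) that pulls successive top-level objects out of concatenated JSON text using `json.JSONDecoder.raw_decode`. ijson automates this with genuinely incremental reads, so the whole document never has to sit in memory, which this simplified version still requires.

```python
import json

def iter_json_objects(text):
    """Yield each top-level JSON value from a string of concatenated JSON."""
    decoder = json.JSONDecoder()
    pos = 0
    while pos < len(text):
        # Skip whitespace/newlines between objects.
        while pos < len(text) and text[pos].isspace():
            pos += 1
        if pos >= len(text):
            break
        # raw_decode returns the parsed value and the index where it ended.
        obj, pos = decoder.raw_decode(text, pos)
        yield obj
```

Unlike the line-by-line approach, this also works when objects are not newline-delimited.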
@Kevin: Question: why doesn't your to_csv() include a mode='a' argument? Does calling to_csv() inside a with open automatically append? Also, your code works great: converting a small file takes longer, but my computer no longer freezes, and the job still finishes in a reasonable time (it should easily be done by the end of the day), so I can let it run in the background. Thanks a lot. (Finally, I edited your code to include UTF-8 encoding on the output file; before doing that, I kept getting errors with the incoming data.)
@K. Glad I could help! Since csv is already open when it is passed into to_csv(), to_csv() doesn't close the file after writing. You can verify this by looking at def save() in the source; it sets close=False at line 1476: github.com/pandas-dev/pandas/blob/master/pandas/formats/…. Good question!
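To illustrate the append behaviour being discussed (a minimal sketch with made-up data): when to_csv() receives an already-open handle, each call simply writes at the handle's current position, so successive calls append without mode='a'.

```python
import io
import pandas

# An in-memory "file" stands in for the open CSV handle.
buf = io.StringIO()
df1 = pandas.DataFrame([{"a": 1, "b": 2}])
df2 = pandas.DataFrame([{"a": 3, "b": 4}])

df1.to_csv(buf, header=True, index=0, quoting=1)   # writes header + first row
df2.to_csv(buf, header=False, index=0, quoting=1)  # appends second row only

print(buf.getvalue())
```

The second call neither truncates the buffer nor repeats the header, which is exactly why the loop in the answer flips `header` to False after the first write.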