

Python Data Analysis Study Notes: Introduction to Pandas

Published: 2025/4/5

pandas (Python data analysis) is an open-source Python library for data analysis.
pandas provides two core data structures: DataFrame and Series.

Installation: pandas depends on NumPy, python-dateutil, and pytz.

pip install pandas

DataFrame

A DataFrame is a labeled two-dimensional object, closely resembling an Excel sheet or a table in a relational database. A DataFrame can be created in the following ways:

  • From another DataFrame

  • From a NumPy array with a two-dimensional shape, or from a composite structure of arrays

  • From a Series

  • From a file such as a CSV
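As a quick sketch of these creation routes (the column names and values here are illustrative, not from the WHO dataset used below):

```python
import numpy as np
import pandas as pd

# From a 2-D NumPy array, supplying column labels
arr = np.arange(6).reshape(3, 2)
df_from_array = pd.DataFrame(arr, columns=["a", "b"])

# From a dict of Series: each Series becomes a column
s = pd.Series([1.0, 2.0, 3.0])
df_from_series = pd.DataFrame({"x": s, "y": s * 2})

# From another DataFrame (makes a new frame with the same data)
df_copy = pd.DataFrame(df_from_array)

print(df_from_array.shape)                 # (3, 2)
print(df_from_series.columns.tolist())     # ['x', 'y']
```

Reading a DataFrame from a CSV file is shown with the WHO data below.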

Prepare the data: download a CSV data file from http://www.exporedata.net/Dow... .

from pandas import read_csv

df = read_csv("WHO_first9cols.csv")
print("Dataframe", df)
print("Shape", df.shape)
print("Length", len(df))
print("Column Headers", df.columns)
print("Data types", df.dtypes)
print("Index", df.index)
print("Values", df.values)

Note: a DataFrame carries an index, similar to a primary key in a relational database; it can be created manually or generated automatically, and is accessed via df.index.
To iterate over the data, use df.values to get all the values; missing values appear in the output as nan.

Series

A Series is a one-dimensional array whose elements may be of different types; this data structure also carries labels. A Series can be created in the following ways:

  • From a Python dict

  • From a NumPy array

  • From a single scalar value

When creating a Series, you can pass the constructor a set of axis labels, commonly called the index.
Selecting a column of a DataFrame returns a Series.
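A minimal sketch of the three creation routes, including an explicit index passed to the constructor (the labels here are made up):

```python
import numpy as np
import pandas as pd

# From a Python dict: the keys become the index
s_dict = pd.Series({"a": 1, "b": 2, "c": 3})

# From a NumPy array, with an explicit index (axis labels)
s_arr = pd.Series(np.array([10, 20, 30]), index=["x", "y", "z"])

# From a single scalar: the value is repeated for every index label
s_scalar = pd.Series(5, index=["p", "q", "r"])

print(s_dict.index.tolist())   # ['a', 'b', 'c']
print(s_arr["y"])              # 20
print(s_scalar.values)         # [5 5 5]
```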

from pandas import read_csv
import numpy as np

df = read_csv("WHO_first9cols.csv")
# Selecting a DataFrame column returns a Series
country_col = df["Country"]
print("Type df", type(df))
print("Type country col", type(country_col))
print("Series shape", country_col.shape)
print("Series index", country_col.index)
print("Series values", country_col.values)
print("Series name", country_col.name)
print("Last 2 countries", country_col[-2:])
print("Last 2 countries type", type(country_col[-2:]))
# NumPy functions also work on pandas DataFrames and Series
# (on whole frames only when every column is numeric, so we
# apply np.sign to the last, numeric, column here)
last_col = df.columns[-1]
print("Last df column signs", last_col, np.sign(df[last_col]))
print(np.sum(df[last_col] - df[last_col].values))

Querying data with pandas

Prepare the data: pip install Quandl, or manually download the CSV file from http://www.quandl.com/SIDC/SU... .

import Quandl

# Data from http://www.quandl.com/SIDC/SUNSPOTS_A-Sunspot-Numbers-Annual
# PyPI url https://pypi.python.org/pypi/Quandl
sunspots = Quandl.get("SIDC/SUNSPOTS_A")
print("Head 2", sunspots.head(2))
print("Tail 2", sunspots.tail(2))

last_date = sunspots.index[-1]
print("Last value", sunspots.loc[last_date])
print("Values slice by date", sunspots["20020101": "20131231"])
print("Slice from a list of indices", sunspots.iloc[[2, 4, -4, -2]])
print("Scalar with iloc", sunspots.iloc[0, 0])
print("Scalar with iat", sunspots.iat[1, 0])
print("Boolean selection", sunspots[sunspots > sunspots.mean()])
print("Boolean selection with column label", sunspots[sunspots.Number > sunspots.Number.mean()])

Statistical functions on DataFrames
describe, count, mad, median, min, max, mode, std, var, skew, kurt
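A small illustration of a few of these on a toy Series (values chosen so the results are easy to verify by hand):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 100])

print(s.count())        # 5
print(s.median())       # 3.0
print(s.min(), s.max()) # 1 100
print(s.mean())         # 22.0
print(s.skew())         # positive: the outlier 100 skews the data right
print(s.describe())     # count/mean/std/min/quartiles/max in one call
```

The same methods work column-wise on a DataFrame.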

Grouping and aggregating DataFrames

import pandas as pd
# randint replaces the deprecated random_integers (upper bound is exclusive)
from numpy.random import seed, rand, randint
import numpy as np

seed(42)

df = pd.DataFrame({'Weather': ['cold', 'hot', 'cold', 'hot', 'cold', 'hot', 'cold'],
                   'Food': ['soup', 'soup', 'icecream', 'chocolate', 'icecream', 'icecream', 'soup'],
                   'Price': 10 * rand(7),
                   'Number': randint(1, 10, size=(7,))})
print(df)

weather_group = df.groupby('Weather')
i = 0
for name, group in weather_group:
    i = i + 1
    print("Group", i, name)
    print(group)

print("Weather group first\n", weather_group.first())
print("Weather group last\n", weather_group.last())
# numeric_only=True is needed in recent pandas because Food is non-numeric
print("Weather group mean\n", weather_group.mean(numeric_only=True))

wf_group = df.groupby(['Weather', 'Food'])
print("WF Groups", wf_group.groups)
# The agg method applies a series of NumPy functions to each group
print("WF Aggregated\n", wf_group.agg([np.mean, np.median]))

Concatenating and appending DataFrames

Database tables support inner and outer joins. DataFrames offer analogous operations: concatenation and appending.
The concat() function concatenates DataFrames; the append() method appends rows of data.
For example:

pd.concat([df[:3], df[3:]])
df[:3].append(df[5:])
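A runnable sketch of the pattern above on a toy DataFrame (names illustrative); note that DataFrame.append() was removed in pandas 2.0, so pd.concat() covers both cases here:

```python
import pandas as pd

df = pd.DataFrame({"k": range(6)})

# Concatenation stitches the two halves back into the original
whole = pd.concat([df[:3], df[3:]])
print(len(whole))                 # 6

# Row-wise append via pd.concat (DataFrame.append was removed in pandas 2.0)
partial = pd.concat([df[:3], df[5:]])
print(partial.index.tolist())     # [0, 1, 2, 5]
```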

pandas provides a merge() function, and DataFrame has a join() method; both implement database-style join operations. By default, join() joins on the index, which does not always match what we want.
Prepare the data:
tips.csv

EmpNr,Amount
5,10
9,5
7,2.5

dest.csv

EmpNr,Dest
5,The Hague
3,Amsterdam
9,Rotterdam

import pandas as pd

dests = pd.read_csv('dest.csv')
tips = pd.read_csv('tips.csv')
# merge() joins the two frames on the employee number
print("Merge() on key\n", pd.merge(dests, tips, on='EmpNr'))
# join() requires suffixes to distinguish the left and right operands
print("Dests join() tips\n", dests.join(tips, lsuffix='Dest', rsuffix='Tips'))
# A more explicit way to perform an inner join with merge()
print("Inner join with merge()\n", pd.merge(dests, tips, how='inner'))
# A small change turns it into a full outer join; missing data becomes NaN
print("Outer join\n", pd.merge(dests, tips, how='outer'))

Handling missing data

Missing data appears as NaN (not a number); there is also a similar marker, NaT (not a time), for dates. pandas provides two functions for detecting missing values, isnull() and notnull(), and the fillna() method replaces missing data with a scalar value.

import pandas as pd
import numpy as np

df = pd.read_csv('WHO_first9cols.csv')
# Select the first 2 rows of Country and Net primary school enrolment ratio male (%)
df = df[['Country', df.columns[-2]]][:2]
print("New df\n", df)
print("Null Values\n", pd.isnull(df))
print("Total Null Values\n", pd.isnull(df).sum())
print("Not Null Values\n", df.notnull())
print("Last Column Doubled\n", 2 * df[df.columns[-1]])
print("Last Column plus NaN\n", df[df.columns[-1]] + np.nan)
print("Zero filled\n", df.fillna(0))
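Complementing fillna(), pandas also provides a dropna() method that discards rows containing missing values; a minimal sketch on a hand-built frame (not the WHO data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0],
                   "b": [np.nan, np.nan, 6.0]})

print(pd.isnull(df).sum().sum())  # 3 missing cells in total
print(df.fillna(0))               # NaNs replaced by a scalar
print(df.dropna())                # keeps only the fully populated row
print(len(df.dropna()))           # 1
```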

Working with dates

http://pandas.pydata.org/pand...
A reference table of frequency (freq) codes:

  • B business day frequency

  • C custom business day frequency (experimental)

  • D calendar day frequency

  • W weekly frequency

  • M month end frequency

  • SM semi-month end frequency (15th and end of month)

  • BM business month end frequency

  • CBM custom business month end frequency

  • MS month start frequency

  • SMS semi-month start frequency (1st and 15th)

  • BMS business month start frequency

  • CBMS custom business month start frequency

  • Q quarter end frequency

  • BQ business quarter end frequency

  • QS quarter start frequency

  • BQS business quarter start frequency

  • A year end frequency

  • BA business year end frequency

  • AS year start frequency

  • BAS business year start frequency

  • BH business hour frequency

  • H hourly frequency

  • T, min minutely frequency

  • S secondly frequency

  • L, ms milliseconds

  • U, us microseconds

  • N nanoseconds
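A short sketch exercising a few of these codes with date_range() (dates chosen arbitrarily):

```python
import pandas as pd

# 'MS': month start frequency
print(pd.date_range("2000-01-01", periods=3, freq="MS"))

# 'B': business days skip weekends (2000-01-01 is a Saturday)
bdays = pd.date_range("2000-01-01", periods=5, freq="B")
print(bdays.dayofweek.tolist())   # only 0-4, Monday through Friday

# 'QS': quarter start frequency
print(pd.date_range("2000-01-01", periods=4, freq="QS"))
```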

import pandas as pd
from pandas.tseries.offsets import DateOffset
import sys

print("Date range", pd.date_range('1/1/1900', periods=42, freq='D'))

try:
    print("Date range", pd.date_range('1/1/1677', periods=4, freq='D'))
except:
    etype, value, _ = sys.exc_info()
    print("Error encountered", etype, value)

offset = DateOffset(seconds=2 ** 63 / 10 ** 9)
mid = pd.to_datetime('1/1/1970')
print("Start valid range", mid - offset)
print("End valid range", mid + offset)

print(pd.to_datetime(['1900/1/1', '1901.12.11']))
print("With format", pd.to_datetime(['19021112', '19031230'], format='%Y%m%d'))
# An unparseable date raises an error unless errors='coerce' is passed
# (the older coerce=True keyword was replaced by errors='coerce')
print("Illegal date coerced", pd.to_datetime(['1902-11-12', 'not a date'], errors='coerce'))

Pivot tables (pivot_table)

Pivot tables are used to summarize data. pandas provides the pivot_table() function and a corresponding DataFrame method.

import pandas as pd
# randint replaces the deprecated random_integers (upper bound is exclusive)
from numpy.random import seed, rand, randint
import numpy as np

seed(42)
N = 7
df = pd.DataFrame({'Weather': ['cold', 'hot', 'cold', 'hot', 'cold', 'hot', 'cold'],
                   'Food': ['soup', 'soup', 'icecream', 'chocolate', 'icecream', 'icecream', 'soup'],
                   'Price': 10 * rand(N),
                   'Number': randint(1, 10, size=(N,))})
print("DataFrame\n", df)
# columns (the cols argument in older pandas) picks the columns to pivot on;
# values picks the columns to aggregate; aggfunc picks the aggregation function
print(pd.pivot_table(df, values=['Price', 'Number'], columns=['Food'], aggfunc='sum'))

Summary

That is the full content of Python Data Analysis Study Notes: Introduction to Pandas; hopefully it helps you solve the problems you run into.
