I want to perform my own complex operations on financial data in a DataFrame, in sequential order.
For example, I am using the following MSFT CSV file taken from Yahoo Finance:
Date,Open,High,Low,Close,Volume,Adj Close
2011-10-19,27.37,27.47,27.01,27.13,42880000,27.13
2011-10-18,26.94,27.40,26.80,27.31,52487900,27.31
2011-10-17,27.11,27.42,26.85,26.98,39433400,26.98
2011-10-14,27.31,27.50,27.02,27.27,50947700,27.27
....
I then do the following:
#!/usr/bin/env python
from pandas import read_csv

# index_col/parse_dates make the Date column the index, so it can be
# retrieved alongside each row
df = read_csv('table.csv', index_col=0, parse_dates=True)
for i, row in enumerate(df.values):
    date = df.index[i]
    open, high, low, close, volume, adjclose = row
    # now perform analysis on open/close based on date, etc.
Is that the most efficient way? Given the focus on speed in pandas, I would assume there must be some special function to iterate through the values in a manner that also retrieves the index (possibly through a generator to be memory efficient)? Unfortunately, df.iteritems only iterates column by column.
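For reference, the index-aware iteration being asked about is what `itertuples` provides; a minimal sketch, using a tiny frame with hypothetical values standing in for the CSV above:

```python
import pandas as pd

# A small frame standing in for the CSV above (hypothetical values)
df = pd.DataFrame(
    {"Open": [27.37, 26.94], "Close": [27.13, 27.31]},
    index=pd.to_datetime(["2011-10-19", "2011-10-18"]),
)

# itertuples yields one namedtuple per row; its first field, Index,
# carries the row label, so the date comes along with the values.
rows = [(row.Index, row.Open, row.Close) for row in df.itertuples()]
```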
I believe the simplest, most efficient way to loop over a DataFrame is to use numpy and numba. In that case, a loop can in many situations be approximately as fast as vectorized operations. If numba is not an option, plain numpy is likely the next best option. As has been noted many times, your default should be vectorization, but this answer merely considers efficient looping, given a decision to loop for whatever reason.
For a test case, let's use @DSM's answer's example of calculating a percentage change. This is a very simple situation, and as a practical matter you would not write a loop to calculate it, but as such it provides a reasonable baseline for timing vectorized approaches against loops.
Let's set up the four approaches with a small DataFrame; we'll time them on a larger dataset below.
import pandas as pd
import numpy as np
import numba as nb

df = pd.DataFrame({'close': [100, 105, 95, 105]})

pandas_vectorized = df.close.pct_change()[1:]

x = df.close.to_numpy()
numpy_vectorized = (x[1:] - x[:-1]) / x[:-1]

def test_numpy(x):
    pct_chng = np.zeros(len(x))
    for i in range(1, len(x)):
        pct_chng[i] = (x[i] - x[i-1]) / x[i-1]
    return pct_chng

numpy_loop = test_numpy(df.close.to_numpy())[1:]

@nb.jit(nopython=True)
def test_numba(x):
    pct_chng = np.zeros(len(x))
    for i in range(1, len(x)):
        pct_chng[i] = (x[i] - x[i-1]) / x[i-1]
    return pct_chng

numba_loop = test_numba(df.close.to_numpy())[1:]
And here are the timings on a DataFrame with 100,000 rows (timings performed with Jupyter's %timeit function, collapsed into a summary table for readability):
pandas/vectorized    1,130 microseconds
numpy/vectorized       382 microseconds
numpy/looped        72,800 microseconds
numba/looped           455 microseconds
Summary: for simple cases like this example, use (vectorized) pandas for simplicity and readability, and (vectorized) numpy for speed. If you really need to use a loop, use numpy. If numba is available, combine it with numpy for additional speed. In this case, numpy + numba is almost as fast as vectorized numpy code.
Other details:
Not shown are various options like iterrows, itertuples, etc. which are orders of magnitude slower and really should never be used.
The timings here are fairly typical: numpy is faster than pandas and vectorized is faster than loops, but adding numba to numpy will often speed numpy up dramatically.
Everything except the pandas option requires converting the DataFrame column to a numpy array. That conversion is included in the timings.
The time to define/compile the numpy/numba functions was not included in the timings, but would generally be a negligible component of the timing for any large dataframe.
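As a small cross-check on the vectorized numpy line in the setup code: `x[1:] - x[:-1]` is exactly what `np.diff` computes, so the same percent change can be sketched equivalently (using the example close prices):

```python
import numpy as np

x = np.array([100.0, 105.0, 95.0, 105.0])  # the example close prices

# x[1:] - x[:-1] is exactly np.diff(x), so the vectorized percent
# change from the setup code can also be written as:
pct = np.diff(x) / x[:-1]
```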
As mentioned before, pandas objects are most efficient when processing an entire array at once. However, for those who really need to loop through a pandas DataFrame to perform something, like me, I found at least three ways to do it. I have done a short test to see which one of the three is the least time consuming.
import time
import pandas as pd

t = pd.DataFrame({'a': range(0, 10000), 'b': range(10000, 20000)})
B = []

C = []
A = time.time()
for i, r in t.iterrows():
    C.append((r['a'], r['b']))
B.append(time.time() - A)

C = []
A = time.time()
for ir in t.itertuples():
    C.append((ir[1], ir[2]))
B.append(time.time() - A)

C = []
A = time.time()
for r in zip(t['a'], t['b']):
    C.append((r[0], r[1]))
B.append(time.time() - A)

print(B)
Result:
[0.5639059543609619, 0.017839908599853516, 0.005645036697387695]
This is probably not the best way to measure the time consumption, but it was quick for me.
Here are some pros and cons that I am personally aware of:
.iterrows(): returns the index and row items in separate variables, but is significantly slower
.itertuples(): faster than .iterrows(), but returns the index together with the row items; ir[0] is the index
zip: fastest, but no access to the row's index
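One caveat on the zip con above: the row label is only lost if you leave it out. Including `t.index` as an extra zip argument recovers the index while keeping zip's speed (a minimal sketch):

```python
import pandas as pd

t = pd.DataFrame({'a': range(3), 'b': range(10, 13)})

# Adding t.index as the first zip argument yields the row label
# alongside the column values, unlike plain zip(t['a'], t['b']).
triples = [(idx, a, b) for idx, a, b in zip(t.index, t['a'], t['b'])]
```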
Edit 2020/11/10
For what it's worth, here is an updated benchmark with some other alternatives (performance measured on a MacBook Pro, 2.4 GHz Intel Core i9, 8 cores, 32 GB 2667 MHz DDR4):
import sys
import time

import pandas as pd
import tqdm

B = []
t = pd.DataFrame({'a': range(0, 10000), 'b': range(10000, 20000)})
for _ in tqdm.tqdm(range(10)):
    C = []
    A = time.time()
    for i, r in t.iterrows():
        C.append((r['a'], r['b']))
    B.append({"method": "iterrows", "time": time.time() - A})

    C = []
    A = time.time()
    for ir in t.itertuples():
        C.append((ir[1], ir[2]))
    B.append({"method": "itertuples", "time": time.time() - A})

    C = []
    A = time.time()
    for r in zip(t['a'], t['b']):
        C.append((r[0], r[1]))
    B.append({"method": "zip", "time": time.time() - A})

    C = []
    A = time.time()
    for r in zip(*t.to_dict("list").values()):
        C.append((r[0], r[1]))
    B.append({"method": "zip + to_dict('list')", "time": time.time() - A})

    C = []
    A = time.time()
    for r in t.to_dict("records"):
        C.append((r["a"], r["b"]))
    B.append({"method": "to_dict('records')", "time": time.time() - A})

    A = time.time()
    t.agg(tuple, axis=1).tolist()
    B.append({"method": "agg", "time": time.time() - A})

    A = time.time()
    t.apply(tuple, axis=1).tolist()
    B.append({"method": "apply", "time": time.time() - A})

print(f'Python {sys.version} on {sys.platform}')
print(f"Pandas version {pd.__version__}")
print(
    pd.DataFrame(B).groupby("method").agg(["mean", "std"]).xs("time", axis=1).sort_values("mean")
)
## Output
Python 3.7.9 (default, Oct 13 2020, 10:58:24)
[Clang 12.0.0 (clang-1200.0.32.2)] on darwin
Pandas version 1.1.4
mean std
method
zip + to_dict('list') 0.002353 0.000168
zip 0.003381 0.000250
itertuples 0.007659 0.000728
to_dict('records') 0.025838 0.001458
agg 0.066391 0.007044
apply 0.067753 0.006997
iterrows 0.647215 0.019600
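A likely reason `zip + to_dict('list')` comes out fastest: `to_dict("list")` materializes each column as a plain Python list up front, so the loop iterates over native Python objects rather than going through Series machinery on each element. A minimal sketch of the conversion, on a made-up two-row frame:

```python
import pandas as pd

t = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# to_dict("list") returns {'a': [1, 2], 'b': [3, 4]} with plain
# Python scalars; zipping those lists then avoids Series overhead.
cols = t.to_dict("list")
pairs = list(zip(*cols.values()))
```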