Right now I import a fairly large CSV as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available between runs, so I don't have to spend all that time waiting for the script to run?


Current answer

There are lots of great and sufficient answers here, but I'd like to share a test I used on Kaggle, which saves and reads a large df in various pandas-compatible formats:

https://www.kaggle.com/pedrocouto39/fast-reading-w-pickle-feather-parquet-jay

I'm not the author or a friend of the author; however, when I read this question I thought it was worth mentioning.

CSV: 1 min 42 s
Pickle: 4.45 s
Feather: 4.35 s
Parquet: 8.31 s
Jay: 8.12 ms, i.e. 0.0812 s (blazingly fast!)
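
If you want to reproduce a comparison like this on your own data, here is a minimal sketch (the file names and the timing helper are mine, not from the linked notebook; parquet/feather need pyarrow or fastparquet installed):

import time
import pandas as pd

df = pd.read_csv("big.csv")  # hypothetical large input CSV

def timed(label, fn):
    # crude one-shot timer; good enough for a rough comparison
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - t0:.2f} s")

timed("pickle write", lambda: df.to_pickle("df.pkl"))
timed("pickle read", lambda: pd.read_pickle("df.pkl"))
timed("parquet write", lambda: df.to_parquet("df.parquet"))
timed("parquet read", lambda: pd.read_parquet("df.parquet"))
# feather requires a default RangeIndex, hence the reset_index()
timed("feather write", lambda: df.reset_index(drop=True).to_feather("df.feather"))
timed("feather read", lambda: pd.read_feather("df.feather"))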

Other answers

Pickle works well!

import pandas as pd
df.to_pickle('123.pkl')    #to save the dataframe, df to 123.pkl
df1 = pd.read_pickle('123.pkl') #to load 123.pkl back to the dataframe df

As already mentioned, there are different options and file formats (HDF5, JSON, CSV, parquet, SQL) for storing a dataframe. However, pickle is not a first-class citizen (depending on your setup), because:

Pickle is a potential security risk. From the Python documentation for pickle:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

Pickle is slow. You can find benchmarks here and here.

Depending on your setup/usage, neither limitation may apply, but I would not recommend pickle as the default persistence format for pandas dataframes. A round-trip through one of the alternatives looks like the sketch below.
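
For example, a minimal sketch of two of the alternatives listed above (parquet needs pyarrow or fastparquet installed, HDF5 needs the tables package; the file names are illustrative):

import pandas as pd

df = pd.DataFrame({'A': [0, 1, 0], 'B': [True, False, True]})

# parquet round-trip
df.to_parquet('df.parquet')
df_pq = pd.read_parquet('df.parquet')

# HDF5 round-trip
df.to_hdf('df.h5', key='df', mode='w')
df_h5 = pd.read_hdf('df.h5', key='df')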

Arctic is a high-performance datastore for pandas, numpy, and other numeric data. It sits on top of MongoDB. Probably overkill for the OP, but worth mentioning for other people stumbling across this post.
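
A rough sketch of what using it looks like, to the best of my recollection of the arctic API (assumes a MongoDB instance on localhost; the library and key names are made up):

from arctic import Arctic
import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3]})

store = Arctic('localhost')          # connect to MongoDB
store.initialize_library('cache')    # one-time library setup
lib = store['cache']

lib.write('my_df', df)               # versioned write of the dataframe
df_back = lib.read('my_df').data     # read returns an item; .data is the df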

Pyarrow cross-version compatibility

Overall I've moved to pyarrow/feather (given the deprecation warnings for msgpack from pandas). However, I've run into a challenge with pyarrow's specification being transient across versions: data serialized with pyarrow 0.15.1 cannot be deserialized with 0.16.0 (ARROW-7961). I'm serializing for use with redis, so I have to use a binary encoding.

I retested the various options (using a Jupyter notebook):

import sys, pickle, zlib, warnings, io
import pyarrow as pa  # needed for pa.serialize below

# each method serializes the dataframe `out` (a notebook variable) to bytes
class foocls:
    def pyarrow(out): return pa.serialize(out).to_buffer().to_pybytes()
    def msgpack(out): return out.to_msgpack()
    def pickle(out): return pickle.dumps(out)
    def feather(out): return out.to_feather(io.BytesIO())
    def parquet(out): return out.to_parquet(io.BytesIO())

warnings.filterwarnings("ignore")
# foocls.__dict__.values() also yields non-callable dunder attributes
# (e.g. __module__); the TypeError branch below skips those
for c in foocls.__dict__.values():
    sbreak = True
    try:
        c(out)
        print(c.__name__, "before serialization", sys.getsizeof(out))
        print(c.__name__, sys.getsizeof(c(out)))
        %timeit -n 50 c(out)
        print(c.__name__, "zlib", sys.getsizeof(zlib.compress(c(out))))
        %timeit -n 50 zlib.compress(c(out))
    except TypeError as e:
        if "not callable" in str(e): sbreak = False
        else: raise
    except ValueError as e: print(c.__name__, "ERROR", e)
    finally:
        if sbreak: print("=+=" * 30)
warnings.filterwarnings("default")

For my dataframe (the `out` Jupyter variable), the results were as follows:

pyarrow before serialization 533366
pyarrow 120805
1.03 ms ± 43.9 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
pyarrow zlib 20517
2.78 ms ± 81.8 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
msgpack before serialization 533366
msgpack 109039
1.74 ms ± 72.8 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
msgpack zlib 16639
3.05 ms ± 71.7 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
pickle before serialization 533366
pickle 142121
733 µs ± 38.3 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
pickle zlib 29477
3.81 ms ± 60.4 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
feather ERROR feather does not support serializing a non-default index for the index; you can .reset_index() to make the index into column(s)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
parquet ERROR Nested column branch had multiple children: struct<x: double, y: double>
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=

Feather and parquet do not work for my dataframe. I'm going to keep using pyarrow, but I will supplement it with pickle (no compression): when writing to the cache, store both the pyarrow- and pickle-serialized forms; when reading from the cache, fall back to pickle if pyarrow deserialization fails.
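
A minimal sketch of that write-both/fall-back scheme (assuming a local redis instance; the key suffixes and helper names are mine, and pa.serialize matches the snippet above but is deprecated in newer pyarrow):

import pickle
import pyarrow as pa
import redis

r = redis.Redis()  # assumed local redis on the default port

def cache_write(key, df):
    # store both serialized forms so reads can fall back
    r.set(key + ':pa', pa.serialize(df).to_buffer().to_pybytes())
    r.set(key + ':pkl', pickle.dumps(df))

def cache_read(key):
    try:
        return pa.deserialize(r.get(key + ':pa'))
    except Exception:
        # pyarrow version mismatch (e.g. ARROW-7961) -> fall back to pickle
        return pickle.loads(r.get(key + ':pkl'))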

Pandas DataFrames have a to_pickle function, which is handy for saving a DataFrame:

import pandas as pd

a = pd.DataFrame({'A':[0,1,0,1,0],'B':[True, True, False, False, False]})
print(a)
#    A      B
# 0  0   True
# 1  1   True
# 2  0  False
# 3  1  False
# 4  0  False

a.to_pickle('my_file.pkl')

b = pd.read_pickle('my_file.pkl')
print(b)
#    A      B
# 0  0   True
# 1  1   True
# 2  0  False
# 3  1  False
# 4  0  False