Every time I run my script, I currently import a fairly large CSV as a dataframe. Is there a good solution for keeping the dataframe available between runs, so I don't have to spend all that time waiting for the script to run?
Current answer
Pandas DataFrames have a to_pickle function, which is useful for saving a DataFrame:
import pandas as pd
a = pd.DataFrame({'A':[0,1,0,1,0],'B':[True, True, False, False, False]})
print(a)
# A B
# 0 0 True
# 1 1 True
# 2 0 False
# 3 1 False
# 4 0 False
a.to_pickle('my_file.pkl')
b = pd.read_pickle('my_file.pkl')
print(b)
# A B
# 0 0 True
# 1 1 True
# 2 0 False
# 3 1 False
# 4 0 False
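If disk footprint matters, to_pickle can also compress the file; with compression='infer' (the default), pandas picks the codec from the file extension, and read_pickle infers it the same way. A minimal sketch (the file name is arbitrary):
import pandas as pd

a = pd.DataFrame({'A': [0, 1, 0, 1, 0], 'B': [True, True, False, False, False]})
a.to_pickle('my_file.pkl.gz')         # gzip-compressed pickle, inferred from the .gz suffix
b = pd.read_pickle('my_file.pkl.gz')  # decompression is inferred on read as well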
Other answers
Arctic is a high performance datastore for Pandas, numpy and other numeric data. It sits on top of MongoDB. Probably overkill for the OP, but worth mentioning for other folks stumbling across this post.
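A minimal sketch of what that looks like with Arctic's VersionStore, assuming a MongoDB instance running on localhost (the library and symbol names here are made up):
from arctic import Arctic
import pandas as pd

store = Arctic('localhost')         # connect to MongoDB
store.initialize_library('mydata')  # create a library (a namespace for symbols) once
library = store['mydata']

df = pd.DataFrame({'A': [0, 1], 'B': [True, False]})
library.write('my_df', df)          # store the DataFrame under a symbol name

item = library.read('my_df')        # read it back in a later run
df2 = item.data                     # .data holds the DataFrame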
Pickle works well!
import pandas as pd
df.to_pickle('123.pkl') #to save the dataframe, df to 123.pkl
df1 = pd.read_pickle('123.pkl') #to load 123.pkl back to the dataframe df
One more fresh test of to_pickle().
I have 25 .csv files in total to process, and the final dataframe consists of roughly 2M rows.
(Note: besides loading the .csv files, I also manipulate some data and extend the dataframe with new columns.)
Going through all 25 .csv files and creating the dataframe takes about 14 seconds.
Loading the entire dataframe from a pkl file takes less than 1 second.
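That speed-up suggests a simple caching pattern: parse the CSVs once, pickle the combined result, and reuse the pickle on later runs. A minimal sketch, with hypothetical file names:
import os
import pandas as pd

CACHE = 'combined.pkl'  # hypothetical cache file

def load_data():
    if os.path.exists(CACHE):
        return pd.read_pickle(CACHE)  # fast path: well under a second
    # slow path: parse all 25 CSVs, then cache the combined frame
    frames = [pd.read_csv('data_%02d.csv' % i) for i in range(25)]
    df = pd.concat(frames, ignore_index=True)
    df.to_pickle(CACHE)
    return df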
Pyarrow cross-version compatibility
Overall I'm going with pyarrow/feather (given the deprecation warnings from pandas for msgpack). However, pyarrow's serialization format is transient across versions: data serialized with pyarrow 0.15.1 cannot be deserialized with 0.16.0 (ARROW-7961). I use the serialized form with redis, so I have to use a binary encoding.
I retested the various options (using a Jupyter notebook):
import sys, pickle, zlib, warnings, io
import pyarrow as pa

# each method serializes the DataFrame 'out' to bytes with a different format
class foocls:
    def pyarrow(out): return pa.serialize(out).to_buffer().to_pybytes()
    def msgpack(out): return out.to_msgpack()
    def pickle(out): return pickle.dumps(out)
    def feather(out): return out.to_feather(io.BytesIO())
    def parquet(out): return out.to_parquet(io.BytesIO())

warnings.filterwarnings("ignore")
for c in foocls.__dict__.values():
    sbreak = True
    try:
        c(out)
        print(c.__name__, "before serialization", sys.getsizeof(out))
        print(c.__name__, sys.getsizeof(c(out)))
        %timeit -n 50 c(out)
        print(c.__name__, "zlib", sys.getsizeof(zlib.compress(c(out))))
        %timeit -n 50 zlib.compress(c(out))
    except TypeError as e:
        # foocls.__dict__ also contains non-callable dunder entries; skip them
        if "not callable" in str(e): sbreak = False
        else: raise
    except (ValueError) as e: print(c.__name__, "ERROR", e)
    finally:
        if sbreak: print("=+=" * 30)
warnings.filterwarnings("default")
For my data frame (the out Jupyter variable), I get the following results:
pyarrow before serialization 533366
pyarrow 120805
1.03 ms ± 43.9 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
pyarrow zlib 20517
2.78 ms ± 81.8 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
msgpack before serialization 533366
msgpack 109039
1.74 ms ± 72.8 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
msgpack zlib 16639
3.05 ms ± 71.7 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
pickle before serialization 533366
pickle 142121
733 µs ± 38.3 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
pickle zlib 29477
3.81 ms ± 60.4 µs per loop (mean ± std. dev. of 7 runs, 50 loops each)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
feather ERROR feather does not support serializing a non-default index for the index; you can .reset_index() to make the index into column(s)
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
parquet ERROR Nested column branch had multiple children: struct<x: double, y: double>
=+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+==+=
Feather and parquet do not work for my data frame. I'll continue to use pyarrow, but I will supplement it with pickle (no compression): when writing to the cache, store both the pyarrow and pickle serialized forms; when reading from the cache, fall back to pickle if pyarrow deserialization fails.
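A minimal sketch of that fallback strategy over redis, assuming a local instance and a made-up key scheme (note that pa.serialize is deprecated in pyarrow >= 2.0):
import pickle
import pyarrow as pa
import redis

r = redis.Redis()  # assumes a local redis instance

def cache_write(key, df):
    # store both serialized forms side by side
    r.set(key + ':pa', pa.serialize(df).to_buffer().to_pybytes())
    r.set(key + ':pkl', pickle.dumps(df))

def cache_read(key):
    try:
        return pa.deserialize(r.get(key + ':pa'))
    except Exception:
        # pyarrow couldn't deserialize (e.g. version mismatch): fall back to pickle
        return pickle.loads(r.get(key + ':pkl'))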