I am trying to read a large CSV file (approx. 6 GB) in pandas and I am getting a memory error:
MemoryError Traceback (most recent call last)
<ipython-input-58-67a72687871b> in <module>()
----> 1 data=pd.read_csv('aphro.csv',sep=';')
...
MemoryError:
Any help on this?
Current answer
Chunking should not always be the first port of call for this problem.

1. Is the file large due to repeated non-numeric data or unwanted columns? If so, you can sometimes see massive memory savings by reading columns in as categories and selecting only the required columns via the usecols parameter of pd.read_csv (see the sketch after this list).
2. Does your workflow require slicing, manipulating, exporting? If so, you can use dask.dataframe to slice, perform your calculations and export iteratively. Chunking is performed silently by dask, which also supports a subset of the pandas API.
3. If all else fails, read line by line via chunks. Chunk via pandas or via the csv library as a last resort.
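A minimal sketch of the first point, assuming you only need a few columns and that a column such as 'station' holds repeated strings (the column names here are illustrative, not taken from the original question):

import pandas as pd

# Hypothetical column names -- adjust to the columns your file actually has.
# usecols drops unneeded columns at parse time; 'category' stores each
# repeated string value once instead of once per row.
df = pd.read_csv(
    'aphro.csv',
    sep=';',
    usecols=['station', 'date', 'rf'],
    dtype={'station': 'category'},
    parse_dates=['date'],
)
print(df.memory_usage(deep=True))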
Other answers
For large data, I recommend you use the library "dask", e.g.:
# Dataframes implement the Pandas API
import dask.dataframe as dd
df = dd.read_csv('s3://.../2018-*-*.csv')
You can read more from the documentation here.
Another great alternative is modin, because all of its functionality is identical to pandas, yet it leverages distributed dataframe libraries such as dask.
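A minimal sketch, assuming modin is installed with one of its engines (for example via pip install "modin[dask]"); the only change from plain pandas is the import:

# modin mirrors the pandas API; the heavy lifting is delegated to the
# backend engine (dask or ray).
import modin.pandas as mpd

df = mpd.read_csv('aphro.csv', sep=';')
print(df.shape)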
From my projects, another superior library is datatable.
# Datatable python library
import datatable as dt
df = dt.fread("s3://.../2018-*-*.csv")
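If you need a pandas DataFrame for later steps, the datatable Frame can be converted afterwards; a small hedged sketch reusing the file from the question:

import datatable as dt

# fread is datatable's fast, multi-threaded CSV reader.
frame = dt.fread('aphro.csv')
# Convert to pandas only when you actually need pandas-specific functionality.
df = frame.to_pandas()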
Before using the chunksize option, if you want to be sure about the process function you intend to write inside the chunking for-loop mentioned by @unutbu, you can simply use the nrows option.
small_df = pd.read_csv(filename, nrows=100)
Once you are sure the process block is ready, you can put it in the chunking for-loop over the entire dataframe.
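A hedged sketch of that workflow; process below is just a stand-in for whatever per-chunk logic you are developing:

import pandas as pd

def process(df):
    # Stand-in for the real per-chunk logic you are prototyping.
    print(df.shape)

# 1. Develop and debug the function on a small sample first.
small_df = pd.read_csv('aphro.csv', sep=';', nrows=100)
process(small_df)

# 2. Once it behaves as expected, run it over the whole file in chunks.
for chunk in pd.read_csv('aphro.csv', sep=';', chunksize=10 ** 6):
    process(chunk)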
The error shows that the machine does not have enough memory to read the entire CSV into a DataFrame at one time. Assuming you do not need the whole dataset in memory at once, one way to avoid the problem would be to process the CSV in chunks (by specifying the chunksize parameter):
chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)
The chunksize parameter specifies the number of rows per chunk. (The last chunk may, of course, contain fewer than chunksize rows.)
pandas >= 1.2
read_csv with chunksize returns a context manager, to be used like so:
chunksize = 10 ** 6
with pd.read_csv(filename, chunksize=chunksize) as reader:
    for chunk in reader:
        process(chunk)
See GH38225.
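For illustration only (the column name 'rf' and the filter condition are assumptions, not from this answer), process could keep just the rows you need from each chunk, so only the filtered subset ever accumulates in memory:

import pandas as pd

filename = 'aphro.csv'
filtered_parts = []
for chunk in pd.read_csv(filename, sep=';', chunksize=10 ** 6):
    # Keep only the rows of interest; 'rf' > 0 is a placeholder condition.
    filtered_parts.append(chunk[chunk['rf'] > 0])

# Concatenate the much smaller filtered pieces at the end.
result = pd.concat(filtered_parts, ignore_index=True)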
This is how I went about it:
import pandas as pd

# Read the file in chunks of one million rows, assigning column names and
# parsing the date column as it goes.
chunks = pd.read_table('aphro.csv', chunksize=1000000, sep=';',
                       names=['lat', 'long', 'rf', 'date', 'slno'],
                       index_col='slno', header=None, parse_dates=['date'])

# Aggregate each chunk, then concatenate the per-chunk results.
%time df = pd.concat(chunk.groupby(['lat', 'long', chunk['date'].map(lambda x: x.year)])['rf'].agg(['sum']) for chunk in chunks)