df = pd.read_csv('somefile.csv')

...gives an error:

.../site-packages/pandas/io/parsers.py:1130: DtypeWarning: Columns (4,5,7,16) have mixed types. Specify dtype option on import or set low_memory=False.

Why is the dtype option related to low_memory, and why would low_memory=False help?


Current answer

This worked for me! (Presumably because low_memory chunking only applies to the default C parser, so the python engine reads the file in a single pass and never produces per-chunk type guesses.)

file = pd.read_csv('example.csv', engine='python')

Other answers

As firelynx mentioned earlier, if a dtype is explicitly specified and there is mixed data that is not compatible with that dtype, loading will crash. I used a converter like this as a workaround to change the values with incompatible data types so that the data could still be loaded.

import numpy as np

def conv(val):
    # empty fields come through as '' and unparsable values are coerced to 0
    if not val:
        return 0
    try:
        return np.float64(val)
    except (ValueError, TypeError):
        return np.float64(0)

df = pd.read_csv(csv_file, converters={'COL_A': conv, 'COL_B': conv})
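A quick check of the converter on synthetic data (the StringIO input and the column names below are illustrative, not from the original answer):

from io import StringIO

sample = StringIO("COL_A,COL_B\n1.5,oops\n,2.0\n")
df = pd.read_csv(sample, converters={'COL_A': conv, 'COL_B': conv})
print(df)         # the unparsable "oops" and the empty cell both come out as 0
print(df.dtypes)  # both columns end up numeric (float64)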

When processing a huge CSV file (6 million rows), I ran into a similar problem. I had three issues:

The file contained strange characters (fixed with the encoding argument)
The data types were unspecified (fixed with the dtype argument)
Even with the above, I still faced a problem related to a file_format that could not be derived from the filename (handled with try..except, as shown below)

import pandas as pd
from pathlib import Path

df = pd.read_csv(csv_file, sep=';', encoding='ISO-8859-1',
                 names=['permission','owner_name','group_name','size','ctime','mtime','atime','filename','full_filename'],
                 dtype={'permission':str,'owner_name':str,'group_name':str,'size':str,'ctime':object,'mtime':object,'atime':object,'filename':str,'full_filename':str,'first_date':object,'last_date':object})

try:
    df['file_format'] = [Path(f).suffix[1:] for f in df.filename.tolist()]
except TypeError:
    # Path() raises TypeError for non-string entries (e.g. NaN filenames)
    df['file_format'] = ''

Building on the answer given by Jerald Achaibar, we can detect the mixed-dtypes warning and use the slower python engine only when the warning occurs:

import warnings
import pandas

# Force the mixed-datatype warning to be a Python error so we can catch it and
# re-attempt the load using the slower python engine
warnings.simplefilter('error', pandas.errors.DtypeWarning)
try:
    df = pandas.read_csv(path, sep=sep, encoding=encoding)
except pandas.errors.DtypeWarning:
    df = pandas.read_csv(path, sep=sep, encoding=encoding, engine="python")
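One caveat: simplefilter changes the process-wide warning filters. A variant (my suggestion, not part of the original answer) that scopes the filter with warnings.catch_warnings() so it does not leak into the rest of the program:

import warnings
import pandas

def read_csv_with_fallback(path, **kwargs):
    # escalate DtypeWarning to an error only inside this block
    with warnings.catch_warnings():
        warnings.simplefilter('error', pandas.errors.DtypeWarning)
        try:
            return pandas.read_csv(path, **kwargs)
        except pandas.errors.DtypeWarning:
            pass
    # the original filters are restored here; retry with the python engine
    return pandas.read_csv(path, engine="python", **kwargs)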

According to the pandas documentation, specifying low_memory=False is a reasonable solution to this problem as long as engine='c' (which is the default).

If low_memory=False, then whole columns are read in first, and the proper types determined afterwards. For example, a column will be kept as object (strings) where needed to preserve information.
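Applied to the read from the question, that is simply (a minimal sketch; the column names in the dtype variant are placeholders for whatever columns 4, 5, 7 and 16 are called in your file):

import pandas as pd

# let pandas see each whole column before choosing its type...
df = pd.read_csv('somefile.csv', low_memory=False)

# ...or spell the problematic types out up front
df = pd.read_csv('somefile.csv', dtype={'col4': str, 'col5': str, 'col7': str, 'col16': str})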

If low_memory=True (the default), then pandas reads in the data in chunks of rows, then appends them together. Then some of the columns might look like chunks of integers and strings mixed up, depending on whether during the chunk pandas encountered anything that couldn't be cast to integer (say). This could cause problems later. The warning is telling you that this happened at least once in the read in, so you should be careful. Setting low_memory=False will use more memory but will avoid the problem.

Personally, I think low_memory=True is a bad default, but I work in an area that deals with many more small datasets than large ones, so convenience outweighs efficiency.

The following code demonstrates an example where low_memory=True is set and a column comes in with mixed types. It builds on the answer by @firelynx.

import pandas as pd
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO

# make a big csv data file, following earlier approach by @firelynx
csvdata = """1,Alice
2,Bob
3,Caesar
"""

# we have to replicate the "integer column" user_id many many times to get
# pd.read_csv to actually chunk read. otherwise it just reads 
# the whole thing in one chunk, because it's faster, and we don't get any 
# "mixed dtype" issue. the 100000 below was chosen by experimentation.
csvdatafull = ""
for i in range(100000):
    csvdatafull = csvdatafull + csvdata
csvdatafull =  csvdatafull + "foobar,Cthlulu\n"
csvdatafull = "user_id,username\n" + csvdatafull

sio = StringIO(csvdatafull)
# the following line gives me the warning:
#   C:\Users\rdisa\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3072: DtypeWarning: Columns (0) have mixed types.Specify dtype option on import or set low_memory=False.
#   interactivity=interactivity, compiler=compiler, result=result)
# but it does not always give me the warning, so I guess the internal workings of read_csv depend on background factors
x = pd.read_csv(sio, low_memory=True) #, dtype={"user_id": int, "username": "string"})

x.dtypes
# this gives:
# Out[69]: 
# user_id     object
# username    object
# dtype: object

type(x['user_id'].iloc[0]) # int
type(x['user_id'].iloc[1]) # int
type(x['user_id'].iloc[2]) # int
type(x['user_id'].iloc[10000]) # int
type(x['user_id'].iloc[299999]) # str !!!! (even though it's a number! so this chunk must have been read in as strings)
type(x['user_id'].iloc[300000]) # str !!!!!

An aside: to give an example of where this is a problem (and where I first ran into it as a serious issue), imagine you ran pd.read_csv() on a file and then wanted to drop duplicates based on an identifier. Say the identifier is sometimes numeric, sometimes a string. One row might be "81287", another might be "97324-32". Still, they are unique identifiers.

If you use low_memory=True, pandas might read the identifier column in like this:

81287
81287
81287
81287
81287
"81287"
"81287"
"81287"
"81287"
"97324-32"
"97324-32"
"97324-32"
"97324-32"
"97324-32"

Because it splits things into chunks, sometimes the identifier 81287 is a number and sometimes a string. When I tried to drop duplicates based on it,

81287 == "81287"
Out[98]: False
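
A straightforward fix (my suggestion, not part of the original answer) is to normalize the column to a single type before deduplicating; 'identifier' below is a hypothetical column name:

# force one type so 81287 and "81287" compare equal
df['identifier'] = df['identifier'].astype(str)
df = df.drop_duplicates(subset='identifier')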

This worked for me!

dashboard_df = pd.read_csv(p_file, sep=';', error_bad_lines=False, index_col=False, dtype='unicode')
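
Note that error_bad_lines was deprecated in pandas 1.3 and removed in 2.0; on newer versions the closest equivalent is on_bad_lines ('skip' drops bad lines silently, 'warn' drops them with a message):

dashboard_df = pd.read_csv(p_file, sep=';', on_bad_lines='skip', index_col=False, dtype='unicode')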