Today I was pleasantly surprised to discover that, when reading data from a data file (for example), pandas is able to recognize the types of the values:
df = pandas.read_csv('test.dat', delimiter=r"\s+", names=['col1','col2','col3'])
This can be checked, for example, like this:
for i, r in df.iterrows():
    print(type(r['col1']), type(r['col2']), type(r['col3']))
In particular, integers, floats and strings are recognized correctly. However, I have a column with dates in the following format: 2013-6-4. These dates are recognized as strings (rather than Python date objects).
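As a side note, a quicker way to inspect what pandas inferred for each column (a minimal sketch, assuming the same test.dat file as above) is to look at df.dtypes:

import pandas as pd

df = pd.read_csv('test.dat', delimiter=r"\s+", names=['col1', 'col2', 'col3'])

# Prints the inferred dtype of every column at once
print(df.dtypes)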
Perhaps the pandas interface has changed since @Rutger answered, but in the version I am using (0.15.2), the date_parser function receives a list of dates instead of a single value. In that case his code should be updated like this:
from datetime import datetime
import pandas as pd
dateparse = lambda dates: [datetime.strptime(d, '%Y-%m-%d %H:%M:%S') for d in dates]
df = pd.read_csv('test.dat', parse_dates=['datetime'], date_parser=dateparse)
Since the original asker said he wants dates, and the dates are in the format 2013-6-4, the dateparse function should instead be:
dateparse = lambda dates: [datetime.strptime(d, '%Y-%m-%d').date() for d in dates]
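A minimal usage sketch of wiring that parser into read_csv, assuming the dates live in a column named 'date' of the same test.dat file:

from datetime import datetime
import pandas as pd

dateparse = lambda dates: [datetime.strptime(d, '%Y-%m-%d').date() for d in dates]

# 'date' is an assumed column name; adjust the names to match the actual file
df = pd.read_csv('test.dat', delimiter=r"\s+",
                 names=['col1', 'col2', 'date'],
                 parse_dates=['date'], date_parser=dateparse)
print(df['date'].head())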
When loading a CSV file that contains a date column, there are two ways to make pandas recognize the date column:
1. pandas recognizes the format explicitly, via the argument date_parser=mydateparser
2. pandas recognizes the format implicitly, via the argument infer_datetime_format=True
Some sample date column data:
01/01/18
01/02/18
Here we cannot tell whether the first two parts are month/day or day/month. In that case we should use
Method 1: pass the format explicitly
from datetime import datetime

mydateparser = lambda x: datetime.strptime(x, "%m/%d/%y")
df = pd.read_csv(file_name, parse_dates=['date_col_name'],
                 date_parser=mydateparser)
Method 2: infer the format implicitly / automatically
df = pd.read_csv(file_name, parse_dates=['date_col_name'], infer_datetime_format=True)
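Either way, a quick sanity check is to look at the resulting dtype; a minimal sketch, assuming a hypothetical data.csv whose 'date' column holds MM/DD/YY values:

import pandas as pd

df = pd.read_csv('data.csv', parse_dates=['date'], infer_datetime_format=True)

# A successfully parsed column shows up as datetime64[ns] instead of object
print(df['date'].dtype)
print(df['date'].dt.month.head())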
In addition to what the other answers say: if you have to parse very large files with hundreds of thousands of timestamps, date_parser can become a huge performance bottleneck, because it is a Python function called once per row. You can get a sizable performance improvement by keeping the dates as text while parsing the CSV file and then converting the whole column to dates in one go:
# For a data column
df = pd.read_csv(infile, parse_dates={'mydatetime': ['date', 'time']})
df['mydatetime'] = pd.to_datetime(df['mydatetime'], exact=True, cache=True, format='%Y-%m-%d %H:%M:%S')
# For a DateTimeIndex
df = pd.read_csv(infile, parse_dates={'mydatetime': ['date', 'time']}, index_col='mydatetime')
df.index = pd.to_datetime(df.index, exact=True, cache=True, format='%Y-%m-%d %H:%M:%S')
# For a MultiIndex
df = pd.read_csv(infile, parse_dates={'mydatetime': ['date', 'time']}, index_col=['mydatetime', 'num'])
idx_mydatetime = df.index.get_level_values(0)
idx_num = df.index.get_level_values(1)
idx_mydatetime = pd.to_datetime(idx_mydatetime, exact=True, cache=True, format='%Y-%m-%d %H:%M:%S')
df.index = pd.MultiIndex.from_arrays([idx_mydatetime, idx_num])
In my use case, with a file of 200k rows (one timestamp per row), this cut the processing time from roughly one minute to less than a second.
The pandas read_csv method is great for parsing dates. The complete documentation is at http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_csv.html
You can even have the different date parts in different columns and pass the parameter:
parse_dates : boolean, list of ints or names, list of lists, or dict
If True -> try parsing the index. If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a
separate date column. If [[1, 3]] -> combine columns 1 and 3 and parse as a single date
column. {‘foo’ : [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
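For example, a minimal sketch of the dict form, assuming a hypothetical events.csv whose date is split across 'year', 'month' and 'day' columns:

import pandas as pd

# The three columns are concatenated and parsed as one datetime column named 'when'
df = pd.read_csv('events.csv', parse_dates={'when': ['year', 'month', 'day']})
print(df['when'].dtype)  # datetime64[ns]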
The default sensing of dates works well, but it seems to be biased towards North American date formats. If you live elsewhere you might occasionally be caught out by the results. As far as I can remember, 1/6/2000 means 6 January in the USA, as opposed to 1 June where I live. It is smart enough to swap them around if dates like 23/6/2000 are used. It is probably safer to stick with YYYY-MM-DD variations of the date, though. Apologies to the pandas developers here, but I have not tested it with local dates recently.
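If you do need day-first input, read_csv accepts a dayfirst flag; a minimal sketch, assuming a hypothetical file with a 'date' column:

import pandas as pd

# Ambiguous dates such as 1/6/2000 are read as 1 June 2000 rather than 6 January 2000
df = pd.read_csv('data.csv', parse_dates=['date'], dayfirst=True)
print(df['date'].head())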
You can pass a function via the date_parser parameter to convert the format.
date_parser : function
Function to use for converting a sequence of string columns to an array of datetime
instances. The default uses dateutil.parser.parser to do the conversion.
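For example, a minimal sketch of such a function with an explicit format (the file name and the 'timestamp' column are assumptions):

from datetime import datetime
import pandas as pd

def parse_ts(values):
    # read_csv hands over the raw strings; convert them in one pass
    return [datetime.strptime(v, '%Y-%m-%d %H:%M:%S') for v in values]

df = pd.read_csv('log.csv', parse_dates=['timestamp'], date_parser=parse_ts)
print(df['timestamp'].dtype)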