I have data saved in a PostgreSQL database. I am querying this data using Python 2.7 and turning it into a Pandas DataFrame. However, the last column of this DataFrame has a dictionary of values inside it. The DataFrame df looks like this:

Station ID     Pollutants
8809           {"a": "46", "b": "3", "c": "12"}
8810           {"a": "36", "b": "5", "c": "8"}
8811           {"b": "2", "c": "7"}
8812           {"c": "11"}
8813           {"a": "82", "c": "15"}

I need to split this column into separate columns, so that the DataFrame df2 looks like this:

Station ID     a      b       c
8809           46     3       12
8810           36     5       8
8811           NaN    2       7
8812           NaN    NaN     11
8813           82     NaN     15

The major issue I ran into is that the lists are not the same length. But all the lists only ever contain up to the same 3 values: 'a', 'b', and 'c'. And they always appear in the same order ('a' first, 'b' second, 'c' third).

The following code used to work and returned exactly what I wanted (df2):

objs = [df, pandas.DataFrame(df['Pollutant Levels'].tolist()).iloc[:, :3]]
df2 = pandas.concat(objs, axis=1).drop('Pollutant Levels', axis=1)
print(df2)

I was running this code just last week and it was working fine. But now my code is broken and I get this error from line [4]:

IndexError: out-of-bounds on slice (end) 

I did not change the code, yet I am now getting the error. I suspect this is due to my method not being robust or proper.

Any suggestions or guidance on how to split this column of lists into separate columns would be much appreciated!

EDIT: I think the .tolist() and .apply methods are not working on my code because it is one Unicode string, i.e.:

#My data format 
u{'a': '1', 'b': '2', 'c': '3'}

#and not
{u'a': '1', u'b': '2', u'c': '3'}

The data is imported from the PostgreSQL database in this format. Any help or ideas with this issue? Is there a way to convert the Unicode?
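
If the column really does contain string representations of dicts rather than actual dicts, one approach is to parse each value before exploding the column. A minimal sketch, assuming the strings are valid Python literals (ast.literal_eval handles the single-quoted form shown above; json.loads would require double-quoted JSON):

import ast
import pandas as pd

df = pd.DataFrame({
    'Station ID': [8809, 8810],
    'Pollutants': [u"{'a': '46', 'b': '3', 'c': '12'}",
                   u"{'a': '36', 'b': '5', 'c': '8'}"],
})

# parse each string into a real dict, then explode the dicts into columns
df['Pollutants'] = df['Pollutants'].apply(ast.literal_eval)
df2 = df.join(pd.DataFrame(df.pop('Pollutants').tolist()))
print(df2)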


Current Answer

Note: for a dictionary of depth 1 (one nesting level)

>>> df

   Station ID                        Pollutants
0        8809  {"a": "46", "b": "3", "c": "12"}
1        8810   {"a": "36", "b": "5", "c": "8"}
2        8811              {"b": "2", "c": "7"}
3        8812                       {"c": "11"}
4        8813            {"a": "82", "c": "15"}

A quick comparison on a large dataset of 10 million rows:

>>> df = pd.concat([df]*2000000).reset_index(drop=True)
>>> print(df.shape)
(10000000, 2)

def apply_drop(df):
    return df.join(df['Pollutants'].apply(pd.Series)).drop('Pollutants', axis=1)  

def json_normalise_drop(df):
    return df.join(pd.json_normalize(df.Pollutants)).drop('Pollutants', axis=1)  

def tolist_drop(df):
    return df.join(pd.DataFrame(df['Pollutants'].tolist())).drop('Pollutants', axis=1)  

def values_tolist_drop(df):
    return df.join(pd.DataFrame(df['Pollutants'].values.tolist())).drop('Pollutants', axis=1)  

def pop_tolist(df):
    return df.join(pd.DataFrame(df.pop('Pollutants').tolist()))  

def pop_values_tolist(df):
    return df.join(pd.DataFrame(df.pop('Pollutants').values.tolist()))

>>> %timeit apply_drop(df.copy())
1 loop, best of 3: 53min 20s per loop
>>> %timeit json_normalise_drop(df.copy())
1 loop, best of 3: 54.9 s per loop
>>> %timeit tolist_drop(df.copy())
1 loop, best of 3: 6.62 s per loop
>>> %timeit values_tolist_drop(df.copy())
1 loop, best of 3: 6.63 s per loop
>>> %timeit pop_tolist(df.copy())
1 loop, best of 3: 5.99 s per loop
>>> %timeit pop_values_tolist(df.copy())
1 loop, best of 3: 5.94 s per loop
+---------------------+-----------+
| apply_drop          | 53min 20s |
| json_normalise_drop |    54.9 s |
| tolist_drop         |    6.62 s |
| values_tolist_drop  |    6.63 s |
| pop_tolist          |    5.99 s |
| pop_values_tolist   |    5.94 s |
+---------------------+-----------+

df.join(pd.DataFrame(df.pop('Pollutants').values.tolist())) is the fastest.
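
For completeness, a minimal sketch of the winning approach applied to the sample df from the question (the join relies on matching indexes; pass index=df.index to pd.DataFrame if your frame does not use the default RangeIndex):

import pandas as pd

df = pd.DataFrame({
    'Station ID': [8809, 8810, 8811, 8812, 8813],
    'Pollutants': [
        {'a': '46', 'b': '3', 'c': '12'},
        {'a': '36', 'b': '5', 'c': '8'},
        {'b': '2', 'c': '7'},
        {'c': '11'},
        {'a': '82', 'c': '15'},
    ],
})

# pop removes the dict column and returns it; the DataFrame constructor
# unions the dict keys into columns, filling missing keys with NaN
df2 = df.join(pd.DataFrame(df.pop('Pollutants').values.tolist()))
print(df2)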

Other Answers

Merlin's answer is better and super simple, but we don't need a lambda function; the evaluation of the dictionary can be safely handled by either of the two ways illustrated below:

Way 1: Two steps

# step 1: convert the `Pollutants` column to Pandas dataframe series
df_pol_ps = data_df['Pollutants'].apply(pd.Series)

df_pol_ps:
    a   b   c
0   46  3   12
1   36  5   8
2   NaN 2   7
3   NaN NaN 11
4   82  NaN 15

# step 2: concat columns `a, b, c` and drop/remove the `Pollutants` column
df_final = pd.concat([df, df_pol_ps], axis = 1).drop('Pollutants', axis = 1)

df_final:
    StationID   a   b   c
0   8809    46  3   12
1   8810    36  5   8
2   8811    NaN 2   7
3   8812    NaN NaN 11
4   8813    82  NaN 15

Way 2: The above two steps can be combined in one go:

df_final = pd.concat([df, df['Pollutants'].apply(pd.Series)], axis = 1).drop('Pollutants', axis = 1)

df_final:
    StationID   a   b   c
0   8809    46  3   12
1   8810    36  5   8
2   8811    NaN 2   7
3   8812    NaN NaN 11
4   8813    82  NaN 15

my_df = pd.DataFrame.from_dict(my_dict, orient='index', columns=['my_col'])

.. would have parsed the dict properly up front (with orient='index', each dict key becomes a row in the index and its value goes into the 'my_col' column), so the dicts would not get squashed into a single column in the first place.
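
A hypothetical illustration with a single station's dict (my_dict and 'my_col' are the placeholder names from the snippet above):

import pandas as pd

my_dict = {'a': '46', 'b': '3', 'c': '12'}
my_df = pd.DataFrame.from_dict(my_dict, orient='index', columns=['my_col'])
print(my_df)
#   my_col
# a     46
# b      3
# c     12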


How do I split a column of dictionaries into separate columns with pandas?

pd.DataFrame(df['val'].tolist()) is the canonical method for exploding a column of dictionaries.

This is proven with a colorful timing graph (the image is not reproduced in this copy).

Benchmarking code for reference (the code itself is not reproduced in this copy).

Note that I am only timing the explosion, since that is the most interesting part of answering this question; other aspects of the result construction (such as whether to use pop or drop) are tangential to the discussion and can be ignored. It should be noted, however, that using pop avoids the follow-up drop call, so the final solution is a bit more performant, but either way we are still listifying the column and passing it to pd.DataFrame.

Additionally, pop destructively mutates the input DataFrame, which makes it harder to run in benchmarking code that assumes the input is not changed across test runs.
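
Since the original benchmarking code is not included above, here is a minimal sketch of how such a comparison could be run with the standard timeit module (the column name 'val', the data size, and the repeat count are placeholder choices; a fresh df.copy() per run keeps pop's in-place mutation from leaking across timings):

import timeit
import pandas as pd

base = pd.DataFrame({'val': [{'a': '1', 'b': '2', 'c': '3'}] * 10000})

def explode_tolist(df):
    # the canonical explosion: listify the column and let the
    # DataFrame constructor union the dict keys into columns
    return pd.DataFrame(df.pop('val').tolist())

def explode_apply(df):
    # the slow variant criticized below: one Series object per row
    return df['val'].apply(pd.Series)

for fn in (explode_tolist, explode_apply):
    t = timeit.timeit(lambda: fn(base.copy()), number=10)
    print('%s: %.3f s for 10 runs' % (fn.__name__, t))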


Critique of other solutions

df['val'].apply(pd.Series) is extremely slow for large N, as pandas constructs a Series object for each row and then proceeds to construct a DataFrame from them. For larger N the performance dips to the order of minutes or hours.

pd.json_normalize(df['val']) is slower simply because json_normalize is meant to work with much more complex input data, particularly deeply nested JSON with multiple record paths and metadata. We have a simple flat dict for which pd.DataFrame suffices, so use that if your dicts are flat.

Some answers suggest df.pop('val').values.tolist() or df.pop('val').to_numpy().tolist(). I don't think it makes much of a difference whether you listify the Series or the numpy array. It's one operation less to listify the Series directly, and it really isn't slower, so I'd recommend avoiding the numpy array in the intermediate step.
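
To illustrate the flat-dict point, a small sketch comparing the two approaches on this question's kind of data (both are expected to produce the same columns; only the overhead differs):

import pandas as pd

vals = [{'a': '46', 'b': '3'}, {'b': '2', 'c': '7'}]

flat_plain = pd.DataFrame(vals)        # plain constructor: unions the keys
flat_norm = pd.json_normalize(vals)    # works too, but carries extra overhead
print(flat_plain.equals(flat_norm))    # expected: True for flat dicts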

I highly recommend this method to extract the 'Pollutants' column:

df_pollutants = pd.DataFrame(df['Pollutants'].values.tolist(), index=df.index)

It's much faster than

df_pollutants = df['Pollutants'].apply(pd.Series)

when the size of df is giant.
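
A usage note on the index=df.index argument, as a minimal sketch: it matters whenever df does not carry the default RangeIndex, otherwise the exploded columns would misalign when joined back:

import pandas as pd

df = pd.DataFrame(
    {'Pollutants': [{'a': '46', 'b': '3'}, {'c': '11'}]},
    index=[101, 102],  # a non-default index, e.g. station IDs
)

# without index=df.index the new frame would get a RangeIndex (0, 1)
# and the join below would produce all-NaN rows
df_pollutants = pd.DataFrame(df['Pollutants'].values.tolist(), index=df.index)
print(df.drop('Pollutants', axis=1).join(df_pollutants))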