I have a pandas DataFrame df like:

a b
A 1
A 2
B 5
B 5
B 4
C 6

I want to group by the first column and get the second column as lists in rows:

A [1,2]
B [5,5,4]
C [6]

Is it possible to do something like this using pandas groupby?


Current answer

It's time to use agg instead of apply.

Given:

import pandas as pd

df = pd.DataFrame({'a': ['A', 'A', 'B', 'B', 'B', 'C'], 'b': [1, 2, 5, 5, 4, 6], 'c': [1, 2, 5, 5, 4, 6]})

If you want multiple columns stacked into lists, the result is a pd.DataFrame:

df.groupby('a')[['b', 'c']].agg(list)
# or 
df.groupby('a').agg(list)

If you want a single column as a list, the result is a pd.Series:

df.groupby('a')['b'].agg(list)
# or
df.groupby('a')['b'].apply(list)

Note that the pd.DataFrame result is about 10 times slower than the pd.Series result when you aggregate only a single column, so use it only in the multi-column case.
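As a rough check on your own data, a quick %timeit comparison of the two forms (using the df defined above; exact numbers will vary) could look like:

%timeit df.groupby('a')['b'].agg(list)      # Series result - typically faster
%timeit df.groupby('a')[['b']].agg(list)    # one-column DataFrame result - typically slower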

Other answers

To solve this for several columns of a DataFrame:

In [5]: df = pd.DataFrame({'a': ['A','A','B','B','B','C'], 'b': [1,2,5,5,4,6], 'c': [3,3,3,4,4,4]})

In [6]: df
Out[6]: 
   a  b  c
0  A  1  3
1  A  2  3
2  B  5  3
3  B  5  4
4  B  4  4
5  C  6  4

In [7]: df.groupby('a').agg(lambda x: list(x))
Out[7]: 
           b          c
a                      
A     [1, 2]     [3, 3]
B  [5, 5, 4]  [3, 4, 4]
C        [6]        [4]

This answer was inspired by Anamika Modi's answer. Thank you!

Use any of the following groupby and agg recipes.

# Setup
df = pd.DataFrame({
  'a': ['A', 'A', 'B', 'B', 'B', 'C'],
  'b': [1, 2, 5, 5, 4, 6],
  'c': ['x', 'y', 'z', 'x', 'y', 'z']
})
df

   a  b  c
0  A  1  x
1  A  2  y
2  B  5  z
3  B  5  x
4  B  4  y
5  C  6  z

To aggregate multiple columns into lists, use either of the following:

df.groupby('a').agg(list)
df.groupby('a').agg(pd.Series.tolist)

           b          c
a                      
A     [1, 2]     [x, y]
B  [5, 5, 4]  [z, x, y]
C        [6]        [z]
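agg also accepts a dict mapping column names to functions (as used further below), which lets you mix list aggregation with other aggregations when only some columns should become lists:

# Collect 'b' into lists, keep only the first value of 'c'
df.groupby('a').agg({'b': list, 'c': 'first'})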

To group-listify a single column only, convert the groupby into a SeriesGroupBy object, then call SeriesGroupBy.agg. Use:

df.groupby('a').agg({'b': list})  # 4.42 ms 
df.groupby('a')['b'].agg(list)    # 2.76 ms - faster

a
A       [1, 2]
B    [5, 5, 4]
C          [6]
Name: b, dtype: object
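If you prefer a flat two-column DataFrame over a Series, reset_index converts the result back:

# Flat DataFrame with columns 'a' and 'b' and a default integer index
df.groupby('a')['b'].agg(list).reset_index()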

Building on @B.M's answer, here is a more general version, updated to work with newer library versions (numpy 1.19.2, pandas 1.2.1). This solution can also handle multi-indices:

However, it has not been rigorously tested, so use it with caution.

If performance is important, drop down to the numpy level:

import pandas as pd
import numpy as np

np.random.seed(0)
df = pd.DataFrame({'a': np.random.randint(0, 10, 90), 'b': [1,2,3]*30, 'c':list('abcefghij')*10, 'd': list('hij')*30})


def f_multi(df,col_names):
    if not isinstance(col_names,list):
        col_names = [col_names]
        
    values = df.sort_values(col_names).values.T

    col_idcs = [df.columns.get_loc(cn) for cn in col_names]
    other_col_names = [name for idx, name in enumerate(df.columns) if idx not in col_idcs]
    other_col_idcs = [df.columns.get_loc(cn) for cn in other_col_names]

    # split df into indexing columns (= keys) and data columns (= vals)
    keys = values[col_idcs,:]
    vals = values[other_col_idcs,:]
    
    # list of tuple of key pairs
    multikeys = list(zip(*keys))
    
    # remember unique key pairs and their indices
    ukeys, index = np.unique(multikeys, return_index=True, axis=0)
    
    # split data columns according to those indices
    arrays = np.split(vals, index[1:], axis=1)

    # resulting list of subarrays has same number of subarrays as unique key pairs
    # each subarray has the following shape:
    #    rows = number of non-grouped data columns
    #    cols = number of data points grouped into that unique key pair
    
    # prepare multi index
    idx = pd.MultiIndex.from_arrays(ukeys.T, names=col_names) 

    list_agg_vals = dict()
    for tup in zip(*arrays, other_col_names):
        col_vals = tup[:-1] # first entries are the subarrays from above 
        col_name = tup[-1]  # last entry is data-column name
        
        list_agg_vals[col_name] = col_vals

    df2 = pd.DataFrame(data=list_agg_vals, index=idx)
    return df2

Tests:

In [227]: %timeit f_multi(df, ['a','d'])

2.54 ms ± 64.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [228]: %timeit df.groupby(['a','d']).agg(list)

4.56 ms ± 61.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)


Result: for random seed 0, f_multi(df, ['a', 'd']) returns a DataFrame indexed by the MultiIndex ('a', 'd'), with the grouped values of 'b' and 'c' collected into array-valued cells.
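Since f_multi also accepts a single column name, a minimal usage sketch on the df defined above is:

# Group by 'a' alone; 'b', 'c' and 'd' become array-valued cells, one row per unique key
f_multi(df, 'a')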

Here I group the elements using "|" as a separator:

    import pandas as pd

    df = pd.read_csv('input.csv')

    df
    Out[1]:
      Area  Keywords
    0    A         1
    1    A         2
    2    B         5
    3    B         5
    4    B         4
    5    C         6

    df.dropna(inplace=True)
    df['Area'] = df['Area'].apply(lambda x: x.lower().strip())
    print(df.columns)
    df_op = df.groupby('Area').agg({"Keywords": lambda x: "|".join(x)})

    df_op.to_csv('output.csv')
    Out[2]:
    df_op
    Area  Keywords

    A       [1| 2]
    B    [5| 5| 4]
    C          [6]
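If the joined strings later need to be turned back into lists, splitting on the same separator recovers them (a small sketch, reusing df_op from above):

    # Recover list values from the '|'-joined strings
    df_op['Keywords'] = df_op['Keywords'].str.split('|')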

Sorting takes O(n log n) time and is the most time-consuming operation in the solutions above.

For the simple case (a single column), pd.Series.to_list works and can be considered more efficient, unless other frameworks are being considered.

e.g.

import pandas as pd
from string import ascii_lowercase
import random

def generate_string(case=4):
    return ''.join([random.choice(ascii_lowercase) for _ in range(case)])

df = pd.DataFrame({'num_val':[random.randint(0,100) for _ in range(20000000)],'string_val':[generate_string() for _ in range(20000000)]})


%timeit df.groupby('string_val').agg({'num_val':pd.Series.to_list})

For 20 million records it takes about 17.2 seconds, compared with about 19.2 seconds for apply(list) and about 20.6 seconds for a lambda function.
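The two comparison calls behind those numbers are presumably along these lines (timings depend on hardware):

# apply(list) variant
%timeit df.groupby('string_val')['num_val'].apply(list)

# lambda variant
%timeit df.groupby('string_val').agg({'num_val': lambda x: list(x)})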