How do I find all the rows in a pandas DataFrame which have the max value of the count column, after grouping by the ['Sp','Mt'] columns?
Example 1: the following DataFrame, which I group by ['Sp','Mt']:
Sp Mt Value count
0 MM1 S1 a **3**
1 MM1 S1 n 2
2 MM1 S3 cb **5**
3 MM2 S3 mk **8**
4 MM2 S4 bg **10**
5 MM2 S4 dgd 1
6 MM4 S2 rd 2
7 MM4 S2 cb 2
8 MM4 S2 uyi **7**
The expected output is to get the result rows whose count is the maximum within each group, like this:
0 MM1 S1 a **3**
2 MM1 S3 cb **5**
3 MM2 S3 mk **8**
4 MM2 S4 bg **10**
8 MM4 S2 uyi **7**
Example 2: this DataFrame, which I group by ['Sp','Mt']:
Sp Mt Value count
4 MM2 S4 bg 10
5 MM2 S4 dgd 1
6 MM4 S2 rd 2
7 MM4 S2 cb 8
8 MM4 S2 uyi 8
The expected output is to get all the rows where count equals the maximum within each group, like this:
Sp Mt Value count
4 MM2 S4 bg 10
7 MM4 S2 cb 8
8 MM4 S2 uyi 8
import pandas as pd

df = pd.DataFrame({
'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
'count' : [3,2,5,8,10,1,2,2,7]
})
df.groupby(['sp', 'mt']).apply(lambda grp: grp.nlargest(1, 'count'))
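Note that the result of this groupby().apply() call carries the ('sp', 'mt') group keys as extra index levels on top of the original row index. A small sketch, using pandas' group_keys parameter, of how to keep only the original index:
# group_keys=False leaves the original row index on the result
# instead of prepending the group keys as extra index levels.
df.groupby(['sp', 'mt'], group_keys=False).apply(lambda grp: grp.nlargest(1, 'count'))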
In summary, there are many ways to do this, but which one is faster?
import pandas as pd
import numpy as np
import time

df = pd.DataFrame(np.random.randint(1, 10, size=(1000000, 2)), columns=list('AB'))

# Method 1: boolean mask from groupby + transform(max)
start_time = time.time()
df1idx = df.groupby(['A'])['B'].transform(max) == df['B']
df1 = df[df1idx]
print("---1 ) %s seconds ---" % (time.time() - start_time))

# Method 2: sort, then take the last row of each group
start_time = time.time()
df2 = df.sort_values('B').groupby(['A']).tail(1)
print("---2 ) %s seconds ---" % (time.time() - start_time))

# Method 3: sort ascending, then drop duplicates keeping the last occurrence
start_time = time.time()
df3 = df.sort_values('B').drop_duplicates(['A'], keep='last')
print("---3 ) %s seconds ---" % (time.time() - start_time))

# Method 3b: sort descending, then drop duplicates keeping the first occurrence
start_time = time.time()
df3b = df.sort_values('B', ascending=False).drop_duplicates(['A'])
print("---3b) %s seconds ---" % (time.time() - start_time))

# Method 4: same mask as method 1, written inline
start_time = time.time()
df4 = df[df['B'] == df.groupby(['A'])['B'].transform(max)]
print("---4 ) %s seconds ---" % (time.time() - start_time))

# Method 5: groupby + nlargest(1), then select the original rows via the second index level
start_time = time.time()
d = df.groupby('A')['B'].nlargest(1)
df5 = df.iloc[[i[1] for i in d.index], :]
print("---5 ) %s seconds ---" % (time.time() - start_time))
And the winner is…
---1 ) 0.03337574005126953 seconds ---
---2 ) 0.1346898078918457 seconds ---
---3 ) 0.10243558883666992 seconds ---
---3b) 0.1004343032836914 seconds ---
---4 ) 0.028397560119628906 seconds ---
---5 ) 0.07552886009216309 seconds ---
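Not included in the timings above: another common pattern is groupby + idxmax, which looks up the index label of the first maximum per group and selects those rows with loc. A sketch on the same benchmark frame; note that, unlike the transform-based mask, it returns exactly one row per group, so tied rows are dropped:
# Method 6 (sketch): one row per 'A' group, selected via the index label of its max 'B'
df6 = df.loc[df.groupby('A')['B'].idxmax()]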
Realized that applying nlargest to the groupby object works just as well:
Added advantage - you can also fetch the top n values if required:
In [85]: import pandas as pd
In [86]: df = pd.DataFrame({
...: 'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
...: 'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
...: 'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
...: 'count' : [3,2,5,8,10,1,2,2,7]
...: })
## Apply nlargest(1) to get the row with the max count per group; nlargest(n) gives the top n rows:
In [87]: df.groupby(["sp", "mt"]).apply(lambda x: x.nlargest(1, "count")).reset_index(drop=True)
Out[87]:
count mt sp val
0 3 S1 MM1 a
1 5 S3 MM1 cb
2 8 S3 MM2 mk
3 10 S4 MM2 bg
4 7 S2 MM4 uyi
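To get the top n rows per group rather than just the maximum, simply change the argument to nlargest; for example, the top two counts within each (sp, mt) group:
# nlargest(2) keeps the two largest counts within each group
df.groupby(["sp", "mt"]).apply(lambda x: x.nlargest(2, "count")).reset_index(drop=True)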
I have been using this functional style for many group operations:
df = pd.DataFrame({
'Sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
'Mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
'Val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
'Count' : [3,2,5,8,10,1,2,2,7]
})
df.groupby('Mt')\
.apply(lambda group: group[group.Count == group.Count.max()])\
.reset_index(drop=True)
Sp Mt Val Count
0 MM1 S1 a 3
1 MM4 S2 uyi 7
2 MM2 S3 mk 8
3 MM2 S4 bg 10
.reset_index(drop=True) drops the group index, leaving a plain default index.
You may not need to do a groupby() at all; sort_values + drop_duplicates achieves the same thing:
df.sort_values('count').drop_duplicates(['Sp', 'Mt'], keep='last')
Out[190]:
Sp Mt Value count
0 MM1 S1 a 3
2 MM1 S3 cb 5
8 MM4 S2 uyi 7
3 MM2 S3 mk 8
4 MM2 S4 bg 10
The same logic also works with tail:
df.sort_values('count').groupby(['Sp', 'Mt']).tail(1)
Out[52]:
Sp Mt Value count
0 MM1 S1 a 3
2 MM1 S3 cb 5
8 MM4 S2 uyi 7
3 MM2 S3 mk 8
4 MM2 S4 bg 10
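One caveat worth noting (my observation, not stated in the answer above): sort_values + drop_duplicates, like tail(1), keeps exactly one row per ['Sp', 'Mt'] group, so it silently drops ties such as example 2's two rows with count 8 in group (MM4, S2). A self-contained sketch contrasting it with the transform-based mask, which keeps all tied rows:
import pandas as pd

# Example 2's data: group (MM4, S2) has two rows tied at count 8
df2 = pd.DataFrame({
    'Sp': ['MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'Mt': ['S4', 'S4', 'S2', 'S2', 'S2'],
    'Value': ['bg', 'dgd', 'rd', 'cb', 'uyi'],
    'count': [10, 1, 2, 8, 8]
})

# Keeps only one of the tied rows per group
deduped = df2.sort_values('count').drop_duplicates(['Sp', 'Mt'], keep='last')

# Keeps both tied rows, matching example 2's expected output
all_max = df2[df2['count'] == df2.groupby(['Sp', 'Mt'])['count'].transform('max')]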