How can I find the row for which the value of a specific column is maximal?
df.max() gives me the maximal value for each column, but I don't know how to get the corresponding row.
Current answer
Use:
data.iloc[data['A'].idxmax()]
data['A'].idxmax() finds the index of the row with the maximum value in column A; data.iloc[] then returns that row.
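Note that idxmax() returns an index label, so iloc only works above because the default integer index makes labels and positions coincide. A minimal sketch (with made-up data) using loc, which looks rows up by label:

import pandas as pd

data = pd.DataFrame({'A': [2, 4, 3]}, index=['x', 'y', 'z'])
data.loc[data['A'].idxmax()]  # row labelled 'y'; data.iloc['y'] would raise,
                              # since iloc expects an integer position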
Other answers
Very simple: we have the df shown below, and we want to print the row with the max value in C:
A B C
x 1 4
y 2 10
z 5 9
In:
df.loc[df['C'] == df['C'].max()] # condition check
Out:
A B C
y 2 10
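For completeness, here is a self-contained sketch of the same approach, with the table above rebuilt as a hypothetical DataFrame:

import pandas as pd

df = pd.DataFrame({'A': ['x', 'y', 'z'], 'B': [1, 2, 5], 'C': [4, 10, 9]})
df.loc[df['C'] == df['C'].max()]  # every row where C equals its maximum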
mx.iloc[0].idxmax()
This one-liner finds the maximum value within a single row of the dataframe; here mx is the dataframe and iloc[0] selects the row at position 0.
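A self-contained sketch with hypothetical values:

import pandas as pd

mx = pd.DataFrame({'A': [1, 7], 'B': [9, 2], 'C': [4, 5]})
mx.iloc[0].idxmax()  # 'B' -- the column label of the max value in row 0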
If multiple rows attain the maximum value, both answers above return only one index. If you want all such rows, there doesn't appear to be a single function for it, but it isn't hard to do. Here is an example with a Series; a DataFrame works the same way:
In [1]: from pandas import Series, DataFrame
In [2]: s = Series([2, 4, 4, 3], index=['a', 'b', 'c', 'd'])
In [3]: s.idxmax()
Out[3]: 'b'
In [4]: s[s==s.max()]
Out[4]:
b 4
c 4
dtype: int64
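For a DataFrame, the same boolean mask works; a sketch continuing the session above with the same values:

In [5]: df = DataFrame({'A': [2, 4, 4, 3]}, index=['a', 'b', 'c', 'd'])

In [6]: df[df['A'] == df['A'].max()]
Out[6]:
   A
b  4
c  4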
Use pandas' idxmax function. It's straightforward:
>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
A B C
0 1.232853 -1.979459 -0.573626
1 0.140767 0.394940 1.068890
2 0.742023 1.343977 -0.579745
3 2.125299 -0.649328 -0.211692
4 -0.187253 1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
Alternatively, you could also use numpy.argmax, such as numpy.argmax(df['A']) -- it provides the same thing and appears at least as fast as idxmax in cursory observations. Note that idxmax() returns index labels, not integers. For example, if you have string values as your index labels, like rows 'a' through 'e', you might want to know that the max occurs in row 4 (not row 'd'). If you want the integer position of that label within the Index, you have to get it manually (which can be tricky now that duplicate row labels are allowed).
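A minimal sketch of that manual lookup, with made-up values, showing why duplicate labels make it tricky:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [2, 4, 4, 3]}, index=['a', 'b', 'b', 'd'])

label = df['A'].idxmax()             # 'b' -- a label, not a position
df.index.get_loc(label)              # slice(1, 3, None): with duplicate labels,
                                     # get_loc returns a slice or boolean mask
int(np.argmax(df['A'].to_numpy()))   # 1 -- the unambiguous integer position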
History:
idxmax() used to be called argmax() prior to 0.11. argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0. Back as of pandas 0.16, argmax still existed and performed the same function (though it appeared to run more slowly than idxmax); the argmax function returned the integer position within the index of the row holding the maximum element. pandas then moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common.
For example, consider this toy DataFrame with duplicate row labels:
In [19]: dfrm
Out[19]:
A B C
a 0.143693 0.653810 0.586007
b 0.623582 0.312903 0.919076
c 0.165438 0.889809 0.000967
d 0.308245 0.787776 0.571195
e 0.870068 0.935626 0.606911
f 0.037602 0.855193 0.728495
g 0.605366 0.338105 0.696460
h 0.000000 0.090814 0.963927
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
In [20]: dfrm['A'].idxmax()
Out[20]: 'i'
In [21]: dfrm.iloc[dfrm['A'].idxmax()] # .ix instead of .iloc in older versions of pandas
Out[21]:
A B C
i 0.688343 0.188468 0.352213
i 0.879000 0.105039 0.900260
So simply using idxmax is not enough here, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
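A sketch of how to recover that positional behavior in current pandas, recreating just the two duplicated rows of dfrm (values taken from the table above):

import numpy as np
import pandas as pd

dfrm = pd.DataFrame({'A': [0.688343, 0.879000],
                     'B': [0.188468, 0.105039],
                     'C': [0.352213, 0.900260]}, index=['i', 'i'])

pos = int(np.argmax(dfrm['A'].to_numpy()))  # positional argmax (1 here, 9 above)
dfrm.iloc[pos]  # exactly one row, even though the label 'i' is duplicated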
This is exactly one of those nasty, bug-prone behaviors in dynamically typed languages that make this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it's very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can't easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.
So you're left hoping that your unit tests covered everything (they didn't, or more likely no one wrote any tests). Otherwise, most likely, you're just left waiting to smack into this error at runtime. In that case you probably have to drop many hours' worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to manually reproduce the problem, finally figure out that it's because idxmax can only report the label of the max row, be disappointed that no standard function automatically gets the position of the max row for you, write a buggy implementation yourself, edit the code, and pray you don't run into the problem again.