The pandas drop_duplicates function is great for "uniquifying" a dataframe. I would like to drop all rows which are duplicates across a subset of columns. Is this possible?

    A   B   C
0   foo 0   A
1   foo 1   A
2   foo 1   B
3   bar 1   A

For example, I would like to drop rows which match on columns A and C, so this should drop rows 0 and 1.
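To be concrete, the expected result would be:

    A   B   C
2   foo 1   B
3   bar 1   A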


Current answer

You can use duplicated() to flag all duplicate rows and filter out the flagged ones. If you need to assign columns to new_df later, make sure to call .copy() so that you don't get a SettingWithCopyWarning down the line.

new_df = df[~df.duplicated(subset=['A', 'C'], keep=False)].copy()
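
For example, on the frame from the question this keeps only rows 2 and 3 (a quick sketch; new_df is just an illustrative name):

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# duplicated(keep=False) marks every member of a duplicate group as True
new_df = df[~df.duplicated(subset=['A', 'C'], keep=False)].copy()
print(new_df)
#      A  B  C
# 2  foo  1  B
# 3  bar  1  A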


A nice feature of this approach is that you can drop duplicates conditionally with it. For example, to drop all duplicated rows only when column A equals 'foo', you can use the following code.

new_df = df[~(df.duplicated(subset=['A', 'B', 'C'], keep=False) & df['A'].eq('foo'))].copy()
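
To see how the condition interacts with the mask, here is the same call on a frame with one exact duplicate pair added purely for illustration (the extra 'bar' row is made up):

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar", "bar"],
                   "B": [0, 1, 1, 1, 1],
                   "C": ["A", "A", "B", "A", "A"]})

# rows 3 and 4 are exact duplicates, but A != 'foo', so the condition keeps them
dupes = df.duplicated(subset=['A', 'B', 'C'], keep=False)
new_df = df[~(dupes & df['A'].eq('foo'))].copy()
print(new_df)  # nothing is dropped: the only exact duplicates are 'bar' rows
#      A  B  C
# 0  foo  0  A
# 1  foo  1  A
# 2  foo  1  B
# 3  bar  1  A
# 4  bar  1  A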


Also, if you don't want to write out the columns by name, you can pass a slice of df.columns to subset=. The same is true for drop_duplicates().

# to consider all columns for identifying duplicates
df[~df.duplicated(subset=df.columns, keep=False)].copy()

# the same is true for drop_duplicates
df.drop_duplicates(subset=df.columns, keep=False)

# to consider columns in positions 0 and 2 (i.e. 'A' and 'C') for identifying duplicates
df.drop_duplicates(subset=df.columns[[0, 2]], keep=False)
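
For what it's worth, the positional form is just shorthand: df.columns[[0, 2]] evaluates to Index(['A', 'C']), so the last call is equivalent to

df.drop_duplicates(subset=['A', 'C'], keep=False)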

Other answers

Try these different things

df = pd.DataFrame({"A":["foo", "foo", "foo", "bar","foo"], "B":[0,1,1,1,1], "C":["A","A","B","A","A"]})

>>> df.drop_duplicates("A", keep='first')

or

>>> df.drop_duplicates(keep='first')

or

>>> df.drop_duplicates(keep='last')
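
On that frame, the three calls keep different rows (a sketch of the expected results):

>>> df.drop_duplicates("A", keep='first')   # rows 0 and 3: first row for each value of A
>>> df.drop_duplicates(keep='first')        # rows 0, 1, 2, 3: row 4 is an exact copy of row 1
>>> df.drop_duplicates(keep='last')         # rows 0, 2, 3, 4: row 1 is dropped instead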

Using groupby and filter

import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.groupby(["A", "C"]).filter(lambda df:df.shape[0] == 1)

If you want the result stored in another dataset:

df.drop_duplicates(keep=False)

or

df.drop_duplicates(keep=False, inplace=False)
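
In either case you assign the result yourself (new_df here is just an illustrative name):

new_df = df.drop_duplicates(keep=False)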

If you need the same dataset updated:

df.drop_duplicates(keep=False, inplace=True)

Note that with keep=False the examples above drop every row in a duplicate group and keep none of them. To keep exactly one row per group, similar to DISTINCT * in SQL, use keep='first' (or keep='last') instead.
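
A minimal sketch of the difference, using a tiny frame with one exact duplicate made up purely for illustration:

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "bar"], "B": [1, 1, 2]})

print(df.drop_duplicates(keep='first'))   # like DISTINCT: one representative survives
#      A  B
# 0  foo  1
# 2  bar  2

print(df.drop_duplicates(keep=False))     # both members of the duplicate pair are dropped
#      A  B
# 2  bar  2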

This is much easier in pandas now with drop_duplicates and the keep parameter.

import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.drop_duplicates(subset=['A', 'C'], keep=False)