The pandas drop_duplicates function is great for "uniquifying" a DataFrame. I would like to drop all rows which are duplicated across a subset of the columns. Is this possible?

    A   B   C
0   foo 0   A
1   foo 1   A
2   foo 1   B
3   bar 1   A

For example, I would like to drop rows which match on columns A and C, so this should drop rows 0 and 1.
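
In other words, the desired result keeps only rows 2 and 3:

    A   B   C
2   foo 1   B
3   bar 1   A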


Current answer

If you want to store the result in another dataset:

df.drop_duplicates(keep=False)

or

df.drop_duplicates(keep=False, inplace=False)

If you need to update the same dataset:

df.drop_duplicates(keep=False, inplace=True)

The examples above remove every row that has a duplicate, keeping none of the occurrences (unlike SQL's DISTINCT *, which keeps one copy of each).
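
For the question's specific case, the duplicate check can be restricted to columns A and C by passing subset (a small sketch using the example data from the question):

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# Rows 0 and 1 share the same (A, C) pair, so keep=False drops both of them
df.drop_duplicates(subset=["A", "C"], keep=False)
#      A  B  C
# 2  foo  1  B
# 3  bar  1  A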

Other answers

If you want to check two columns with try and except statements, this can help you.

if "column_2" in df.columns:
    try:
        df[['column_1', "column_2"]] = df[['header', "column_2"]].drop_duplicates(subset = ["column_2", "column_1"] ,keep="first")
    except:
        df[["column_2"]] = df[["column_2"]].drop_duplicates(subset="column_2" ,keep="first")
        print(f"No column_1 for {path}.")
try:
    df[["column_1"]] = df[["column_1"]].drop_duplicates(subset="column_1" ,keep="first")
except:
    print(f"No column_1 or column_2 for {path}.")

Just wanted to add to Ben's answer on drop_duplicates:

keep: {'first', 'last', False}, default 'first'

first: Drop duplicates except for the first occurrence.
last: Drop duplicates except for the last occurrence.
False: Drop all duplicates.

So setting keep to False will give you the answer you want.

DataFrame.drop_duplicates(*args, **kwargs)
Return DataFrame with duplicate rows removed, optionally only considering certain columns.

Parameters:
    subset : column label or sequence of labels, optional
        Only consider certain columns for identifying duplicates; by default use all of the columns.
    keep : {'first', 'last', False}, default 'first'
        first : Drop duplicates except for the first occurrence.
        last : Drop duplicates except for the last occurrence.
        False : Drop all duplicates.
    take_last : deprecated
    inplace : boolean, default False
        Whether to drop duplicates in place or to return a copy.
    cols : kwargs only argument of subset [deprecated]

Returns:
    deduplicated : DataFrame
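
A quick sketch of how the three keep values behave on the question's data when de-duplicating on columns A and C:

import pandas as pd

df = pd.DataFrame({"A": ["foo", "foo", "foo", "bar"],
                   "B": [0, 1, 1, 1],
                   "C": ["A", "A", "B", "A"]})

# Rows 0 and 1 are duplicates of each other on (A, C)
df.drop_duplicates(subset=["A", "C"], keep="first")  # keeps row 0, drops row 1
df.drop_duplicates(subset=["A", "C"], keep="last")   # keeps row 1, drops row 0
df.drop_duplicates(subset=["A", "C"], keep=False)    # drops both rows 0 and 1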

This is much easier in pandas with drop_duplicates and the keep parameter.

import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.drop_duplicates(subset=['A', 'C'], keep=False)

Using groupby and filter

import pandas as pd
df = pd.DataFrame({"A":["foo", "foo", "foo", "bar"], "B":[0,1,1,1], "C":["A","A","B","A"]})
df.groupby(["A", "C"]).filter(lambda g: g.shape[0] == 1)  # keep only (A, C) groups that contain a single row
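
On the example DataFrame this keeps the same rows as drop_duplicates(subset=['A', 'C'], keep=False), namely rows 2 and 3:

     A  B  C
2  foo  1  B
3  bar  1  A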
