How can I achieve the equivalents of SQL's IN and NOT IN?

I have a list of required values. The scenario is this:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
countries_to_keep = ['UK', 'China']

# pseudo-code:
df[df['country'] not in countries_to_keep]

My current way of doing this is as follows:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
df2 = pd.DataFrame({'country': ['UK', 'China'], 'matched': True})

# IN
df.merge(df2, how='inner', on='country')

# NOT IN
not_in = df.merge(df2, how='left', on='country')
not_in = not_in[pd.isnull(not_in['matched'])]

But this seems like a horrible kludge. Can anyone improve on it?


You can use pd.Series.isin.

For "IN" use: something.isin(somewhere)

Or for "NOT IN": ~something.isin(somewhere)

As an example:

>>> df
    country
0        US
1        UK
2   Germany
3     China
>>> countries_to_keep
['UK', 'China']
>>> df.country.isin(countries_to_keep)
0    False
1     True
2    False
3     True
Name: country, dtype: bool
>>> df[df.country.isin(countries_to_keep)]
    country
1        UK
3     China
>>> df[~df.country.isin(countries_to_keep)]
    country
0        US
2   Germany

I've usually been doing generic filtering over rows like this:

criterion = lambda row: row['countries'] not in countries
not_in = df[df.apply(criterion, axis=1)]
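
For completeness, here is a minimal runnable sketch of the same idea, adapted to the question's df and countries_to_keep; note that apply evaluates the Python-level not in once per row, so it is much slower than isin on large frames:

import pandas as pd

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
countries_to_keep = ['UK', 'China']

# the criterion is evaluated once per row via axis=1
criterion = lambda row: row['country'] not in countries_to_keep
not_in = df[df.apply(criterion, axis=1)]
print(not_in)
#    country
# 0       US
# 2  Germany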

I wanted to filter out rows of dfbc that had a BUSINESS_ID that was also in the BUSINESS_ID column of dfProfilesBusIds:

dfbc = dfbc[~dfbc['BUSINESS_ID'].isin(dfProfilesBusIds['BUSINESS_ID'])]
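
dfbc and dfProfilesBusIds are that answerer's own frames; a self-contained sketch with made-up stand-in data would look like this:

import pandas as pd

# hypothetical stand-ins for the answerer's dfbc and dfProfilesBusIds
dfbc = pd.DataFrame({'BUSINESS_ID': [101, 102, 103, 104], 'value': list('abcd')})
dfProfilesBusIds = pd.DataFrame({'BUSINESS_ID': [102, 104]})

# keep only the dfbc rows whose BUSINESS_ID does NOT appear in dfProfilesBusIds
dfbc = dfbc[~dfbc['BUSINESS_ID'].isin(dfProfilesBusIds['BUSINESS_ID'])]
print(dfbc)
#    BUSINESS_ID value
# 0          101     a
# 2          103     c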

An alternative solution that uses the .query() method:

In [5]: df.query("countries in @countries_to_keep")
Out[5]:
  countries
1        UK
3     China

In [6]: df.query("countries not in @countries_to_keep")
Out[6]:
  countries
0        US
2   Germany

df = pd.DataFrame({'countries':['US','UK','Germany','China']})
countries = ['UK','China']

Implementing in:

df[df.countries.isin(countries)]

Implementing not in, by keeping the rest of the countries:

df[df.countries.isin([x for x in np.unique(df.countries) if x not in countries])]

How do I implement 'in' and 'not in' for a pandas DataFrame?

pandas offers two methods: Series.isin and DataFrame.isin, for Series and DataFrames respectively.


Filter DataFrame Based on ONE Column (this also applies to Series)

The most common scenario is to apply an isin condition on a specific column to filter rows in a DataFrame.

import numpy as np
import pandas as pd

df = pd.DataFrame({'countries': ['US', 'UK', 'Germany', np.nan, 'China']})
df
  countries
0        US
1        UK
2   Germany
3       NaN
4     China

c1 = ['UK', 'China']             # list
c2 = {'Germany'}                 # set
c3 = pd.Series(['China', 'US'])  # Series
c4 = np.array(['US', 'UK'])      # array

Series.isin accepts various types as inputs. The following are all valid ways of getting what you want:

df['countries'].isin(c1)

0    False
1     True
2    False
3    False
4     True
Name: countries, dtype: bool

# `in` operation
df[df['countries'].isin(c1)]

  countries
1        UK
4     China

# `not in` operation
df[~df['countries'].isin(c1)]

  countries
0        US
2   Germany
3       NaN

# Filter with `set` (tuples work too)
df[df['countries'].isin(c2)]

  countries
2   Germany

# Filter with another Series
df[df['countries'].isin(c3)]

  countries
0        US
4     China

# Filter with array
df[df['countries'].isin(c4)]

  countries
0        US
1        UK

Filter DataFrame Based on MULTIPLE Columns

Sometimes, you will want to apply an 'in' membership check with some search terms over multiple columns:

df2 = pd.DataFrame({
    'A': ['x', 'y', 'z', 'q'], 'B': ['w', 'a', np.nan, 'x'], 'C': np.arange(4)})
df2

   A    B  C
0  x    w  0
1  y    a  1
2  z  NaN  2
3  q    x  3

c1 = ['x', 'w', 'p']

To apply the isin condition to both columns "A" and "B", use DataFrame.isin:

df2[['A', 'B']].isin(c1)

      A      B
0   True   True
1  False  False
2  False  False
3  False   True

From this, to retain rows where at least one column is True, we can use any along the first axis:

df2[['A', 'B']].isin(c1).any(axis=1)

0     True
1    False
2    False
3     True
dtype: bool

df2[df2[['A', 'B']].isin(c1).any(axis=1)]

   A  B  C
0  x  w  0
3  q  x  3

Note that if you want to search every column, you would just omit the column selection step and do

df2.isin(c1).any(axis=1)

Similarly, to retain rows where ALL the columns are True, use all in the same manner as before.

df2[df2[['A', 'B']].isin(c1).all(axis=1)]

   A  B  C
0  x  w  0

Notable mentions: np.isin, query, list comprehensions (string data)

In addition to the methods described above, you can also use the NumPy equivalent: numpy.isin.

# `in` operation
df[np.isin(df['countries'], c1)]

  countries
1        UK
4     China

# `not in` operation
df[np.isin(df['countries'], c1, invert=True)]

  countries
0        US
2   Germany
3       NaN

Why is it worth considering? NumPy functions are usually a bit faster than their pandas equivalents because of lower overhead. Since this is an elementwise operation that does not depend on index alignment, there are very few situations where this method is not a suitable replacement for pandas' isin.
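
If you want to check the overhead claim on your own data, a rough comparison with timeit (the numbers will vary with machine, data size and dtype) could look like this:

import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame({'countries': np.random.choice(['US', 'UK', 'Germany', 'China'],
                                                  size=10**6)})
c1 = ['UK', 'China']

# time 10 runs of each membership test
t_pd = timeit.timeit(lambda: df['countries'].isin(c1), number=10)
t_np = timeit.timeit(lambda: np.isin(df['countries'], c1), number=10)
print(f'Series.isin: {t_pd:.3f}s  np.isin: {t_np:.3f}s')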

pandas routines are usually iterative when working with strings, because string operations are hard to vectorise. There is a lot of evidence to suggest that list comprehensions will be faster here, so we resort to a plain `in` check:

c1_set = set(c1) # Using `in` with `sets` is a constant time operation... 
                 # This doesn't matter for pandas because the implementation differs.
# `in` operation
df[[x in c1_set for x in df['countries']]]

  countries
1        UK
4     China

# `not in` operation
df[[x not in c1_set for x in df['countries']]]

  countries
0        US
2   Germany
3       NaN

However, it is a lot clunkier to specify, so don't use it unless you know what you are doing.

Lastly, there is also DataFrame.query, which has been covered in an answer above. numexpr FTW!


Collating possible solutions from the answers:

For IN: df[df['A'].isin([3, 6])]

For NOT IN:

df[-df["A"].isin([3, 6])]
df[~df["A"].isin([3, 6])]
df[df["A"].isin([3, 6]) == False]
df[np.logical_not(df["A"].isin([3, 6]))]
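
These one-liners assume a frame with a numeric column 'A'; a minimal setup to try them on might be:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 3, 5, 6, 8]})

df[df['A'].isin([3, 6])]     # IN      -> rows containing 3 and 6
df[~df['A'].isin([3, 6])]    # NOT IN  -> rows containing 1, 5 and 8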


A little trick if you want to keep the order of the list:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
countries_to_keep = ['Germany', 'US']


ind = [df.index[df['country'] == i].tolist() for i in countries_to_keep]
flat_ind = [item for sublist in ind for item in sublist]

df.reindex(flat_ind)

   country
2  Germany
0       US

My 2 cents worth: I needed a combination of in and an if-else statement for a DataFrame, and this worked for me.

sale_method = pd.DataFrame(model_data["Sale Method"].str.upper())
sale_method["sale_classification"] = np.where(
    sale_method["Sale Method"].isin(["PRIVATE"]),
    "private",
    np.where(
        sale_method["Sale Method"].str.contains("AUCTION"), "auction", "other"
    ),
)
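
model_data above is that poster's own frame; a self-contained sketch with a hypothetical 'Sale Method' column shows what the snippet produces:

import numpy as np
import pandas as pd

# hypothetical stand-in for the poster's model_data
model_data = pd.DataFrame({'Sale Method': ['Private', 'Auction Hall', 'Online']})

sale_method = pd.DataFrame(model_data['Sale Method'].str.upper())
sale_method['sale_classification'] = np.where(
    sale_method['Sale Method'].isin(['PRIVATE']),          # exact 'in' match
    'private',
    np.where(                                               # substring fallback
        sale_method['Sale Method'].str.contains('AUCTION'), 'auction', 'other'
    ),
)
print(sale_method)
#     Sale Method sale_classification
# 0       PRIVATE             private
# 1  AUCTION HALL             auction
# 2        ONLINE               other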

Why is nobody talking about the performance of the various filtering methods? In fact, this topic comes up here regularly (see the examples above). I ran my own performance tests on a large dataset. It is very interesting and instructive.

df = pd.DataFrame({'animals': np.random.choice(['cat', 'dog', 'mouse', 'birds'], size=10**7), 
                   'number': np.random.randint(0,100, size=(10**7,))})

df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000000 entries, 0 to 9999999
Data columns (total 2 columns):
 #   Column   Dtype 
---  ------   ----- 
 0   animals  object
 1   number   int64 
dtypes: int64(1), object(1)
memory usage: 152.6+ MB
%%timeit
# .isin() by one column
conditions = ['cat', 'dog']
df[df.animals.isin(conditions)]
367 ms ± 2.34 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
# .query() by one column
conditions = ['cat', 'dog']
df.query('animals in @conditions')
395 ms ± 3.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
# .loc[]
df.loc[(df.animals=='cat')|(df.animals=='dog')]
987 ms ± 5.17 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
df[df.apply(lambda x: x['animals'] in ['cat', 'dog'], axis=1)]
41.9 s ± 490 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
new_df = df.set_index('animals')
new_df.loc[['cat', 'dog'], :]
3.64 s ± 62.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
new_df = df.set_index('animals')
new_df[new_df.index.isin(['cat', 'dog'])]
469 ms ± 8.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
s = pd.Series(['cat', 'dog'], name='animals')
df.merge(s, on='animals', how='inner')
796 ms ± 30.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

So the isin method turned out to be the fastest, and the method with apply() the slowest, which is not surprising.


You can also use .isin() inside .query():

df.query('country.isin(@countries_to_keep).values')

# Or alternatively:
df.query('country.isin(["UK", "China"]).values')

To negate your query, use ~:

df.query('~country.isin(@countries_to_keep).values')

Update:

Another way is to use comparison operators:

df.query('country == @countries_to_keep')

# Or alternatively:
df.query('country == ["UK", "China"]')

To negate the query, use !=:

df.query('country != @countries_to_keep')