How do I remove duplicates from a list while preserving order? Using a set removes duplicates but destroys the original order. Is there a built-in or Pythonic idiom for this?
Current answer
Not to beat a dead horse (this question is very old and already has plenty of good answers), but here is a solution using pandas that is very fast in many circumstances and dead simple to use.
>>> import pandas as pd
>>> my_list = [0, 1, 2, 3, 4, 1, 2, 3, 5]
>>> pd.Series(my_list).drop_duplicates().tolist()
[0, 1, 2, 3, 4, 5]
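As a closely related aside (assuming the same my_list as above), a pandas Series also offers unique(), which likewise returns values in order of first appearance, only as a plain array rather than a Series:
>>> pd.Series(my_list).unique().tolist()
[0, 1, 2, 3, 4, 5]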
Other answers
x = [1, 2, 1, 3, 1, 4]

# brute force method
arr = []
for i in x:
    if i not in arr:
        arr.append(i)          # keep only the first occurrence

# recursive method
tmp = []
def remove_duplicates(j=0):
    if j < len(x):
        if x[j] not in tmp:
            tmp.append(x[j])
        remove_duplicates(j + 1)

remove_duplicates()

# both arr and tmp end up as [1, 2, 3, 4]
1. These solutions are fine… For removing duplicates while preserving order, the excellent solutions proposed elsewhere on this page:
seen = set()
[x for x in seq if not (x in seen or seen.add(x))]
and variations, e.g.:
seen = set()
[x for x in seq if x not in seen and not seen.add(x)]
are indeed popular because they are simple, minimalistic, and deploy the correct hashing for optimal efficiency. The main complaint about these seems to be that using the invariant None "returned" by the method seen.add(x) as a constant (and therefore superfluous) value in a logical expression, purely for its side effect, is hacky and/or confusing.
2. …but they waste one hash lookup per iteration. Surprisingly, given the amount of discussion and debate on this topic, there is actually a significant improvement to the code that seems to have been overlooked. As shown, each "test-and-set" iteration requires two hash lookups: the first to test membership x not in seen and then again to actually add the value seen.add(x). Since the first operation guarantees that the second will always be successful, there is a wasteful duplication of effort here. And because the overall technique here is so efficient, the excess hash lookups will likely end up being the most expensive proportion of what little work remains. (The instrumented sketch after point 4 below makes this lookup count concrete.)
3. Instead, let the set do its job! Notice that the examples above only call set.add with the foreknowledge that doing so will always result in an increase in set membership. The set itself never gets a chance to reject a duplicate; our code snippet has essentially usurped that role for itself. The use of explicit two-step test-and-set code is robbing the set of its own core ability to exclude those duplicates itself.
4. The single-hash-lookup code: The following version cuts the number of hash lookups per iteration in half, from two down to just one.
seen = set()
[x for x in seq if len(seen) < len(seen.add(x) or seen)]
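To make the lookup-count argument in points 2 and 4 concrete, here is a minimal instrumented sketch. CountingSet is a hypothetical helper written purely for this illustration; it simply counts how many membership tests and add() calls each variant performs on the same input.
class CountingSet(set):
    """A set that counts membership tests and add() calls (illustration only)."""
    def __init__(self, *args):
        super().__init__(*args)
        self.contains_calls = 0
        self.add_calls = 0
    def __contains__(self, item):
        self.contains_calls += 1
        return super().__contains__(item)
    def add(self, item):
        self.add_calls += 1
        return super().add(item)

seq = [1, 2, 1, 3, 2, 4]

# Two-lookup variant: a membership test for every element, plus an add() for each new one.
seen = CountingSet()
result = [x for x in seq if not (x in seen or seen.add(x))]
print(result, seen.contains_calls, seen.add_calls)   # [1, 2, 3, 4] 6 4

# Single-lookup variant: no separate membership test; only add() is ever called.
seen = CountingSet()
result = [x for x in seq if len(seen) < len(seen.add(x) or seen)]
print(result, seen.contains_calls, seen.add_calls)   # [1, 2, 3, 4] 0 6
Treat these counts as an illustration of the argument rather than a benchmark; real-world timing also depends on constant factors.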
In-place method
This approach is quadratic, because we have a linear lookup into the list for every element of the list (to that we have to add the cost of rearranging the list because of the del).
That said, it is possible to operate in place if we start from the end of the list and proceed toward the origin, removing each item that already appears in the sub-list at its left.
The idea, in code, is simply
for i in range(len(l)-1, 0, -1):
    if l[i] in l[:i]:
        del l[i]
A simple test of the implementation
In [91]: from random import randint, seed
In [92]: seed('20080808') ; l = [randint(1,6) for _ in range(12)] # Beijing Olympics
In [93]: for i in range(len(l)-1,0,-1):
    ...:     print(l)
    ...:     print(i, l[i], l[:i], end='')
    ...:     if l[i] in l[:i]:
    ...:         print(': remove', l[i])
    ...:         del l[i]
    ...:     else:
    ...:         print()
    ...: print(l)
[6, 5, 1, 4, 6, 1, 6, 2, 2, 4, 5, 2]
11 2 [6, 5, 1, 4, 6, 1, 6, 2, 2, 4, 5]: remove 2
[6, 5, 1, 4, 6, 1, 6, 2, 2, 4, 5]
10 5 [6, 5, 1, 4, 6, 1, 6, 2, 2, 4]: remove 5
[6, 5, 1, 4, 6, 1, 6, 2, 2, 4]
9 4 [6, 5, 1, 4, 6, 1, 6, 2, 2]: remove 4
[6, 5, 1, 4, 6, 1, 6, 2, 2]
8 2 [6, 5, 1, 4, 6, 1, 6, 2]: remove 2
[6, 5, 1, 4, 6, 1, 6, 2]
7 2 [6, 5, 1, 4, 6, 1, 6]
[6, 5, 1, 4, 6, 1, 6, 2]
6 6 [6, 5, 1, 4, 6, 1]: remove 6
[6, 5, 1, 4, 6, 1, 2]
5 1 [6, 5, 1, 4, 6]: remove 1
[6, 5, 1, 4, 6, 2]
4 6 [6, 5, 1, 4]: remove 6
[6, 5, 1, 4, 2]
3 4 [6, 5, 1]
[6, 5, 1, 4, 2]
2 1 [6, 5]
[6, 5, 1, 4, 2]
1 5 [6]
[6, 5, 1, 4, 2]
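If a reusable helper is preferred over the bare loop, the same quadratic in-place idea can be wrapped in a function. This is only a sketch; dedupe_in_place is a hypothetical name, not something proposed in the answers above.
def dedupe_in_place(l):
    # Walk from the end toward the start, deleting any item that already
    # appears in the sub-list to its left; first occurrences survive.
    # Quadratic overall: a linear scan per element, plus the shifting cost of del.
    for i in range(len(l) - 1, 0, -1):
        if l[i] in l[:i]:
            del l[i]
    return l   # the mutation happens in place; returning l is just a convenience

print(dedupe_in_place([6, 5, 1, 4, 6, 1, 6, 2, 2, 4, 5, 2]))   # [6, 5, 1, 4, 2]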
Just to add another (very performant) implementation of such a functionality from an external module1:
>>> from iteration_utilities import unique_everseen
>>> lst = [1,1,1,2,3,2,2,2,1,3,4]
>>> list(unique_everseen(lst))
[1, 2, 3, 4]
Timings
I did some timings (Python 3.6) and these show that it's faster than all the other alternatives I tested, including OrderedDict.fromkeys, f7 and more_itertools.unique_everseen:
%matplotlib notebook

from iteration_utilities import unique_everseen
from collections import OrderedDict
from more_itertools import unique_everseen as mi_unique_everseen

def f7(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x in seen or seen_add(x))]

def iteration_utilities_unique_everseen(seq):
    return list(unique_everseen(seq))

def more_itertools_unique_everseen(seq):
    return list(mi_unique_everseen(seq))

def odict(seq):
    return list(OrderedDict.fromkeys(seq))

from simple_benchmark import benchmark

b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
              {2**i: list(range(2**i)) for i in range(1, 20)},
              'list size (no duplicates)')

b.plot()
And just to make sure, I also did a test with more duplicates, to check whether it makes a difference:
import random
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
              {2**i: [random.randint(0, 2**(i-1)) for _ in range(2**i)] for i in range(1, 20)},
              'list size (lots of duplicates)')
b.plot()
And one containing only a single repeated value:
b = benchmark([f7, iteration_utilities_unique_everseen, more_itertools_unique_everseen, odict],
              {2**i: [1]*(2**i) for i in range(1, 20)},
              'list size (only duplicates)')
b.plot()
In all of these cases the iteration_utilities.unique_everseen function is the fastest (on my computer).
This iteration_utilities.unique_everseen function can also handle unhashable values in the input (however with O(n*n) performance instead of the O(n) performance when the values are hashable).
>>> lst = [{1}, {1}, {2}, {1}, {3}]
>>> list(unique_everseen(lst))
[{1}, {2}, {3}]
1 Disclaimer: I'm the author of that package.
A solution without using imported modules or sets:
text = "ask not what your country can do for you ask what you can do for your country"
sentence = text.split(" ")
noduplicates = [sentence[i] for i in range(len(sentence)) if sentence[i] not in sentence[:i]]
print(noduplicates)
Gives the output:
['ask', 'not', 'what', 'your', 'country', 'can', 'do', 'for', 'you']