How do I remove duplicates from a list while preserving order? Using a set to remove duplicates destroys the original order. Is there a built-in or Pythonic idiom for this?


Current answer

A very late answer to another very old question:

The itertools recipes have a function that does this, using the same seen-set technique, but:

- It handles a standard key function.
- It uses no unseemly hacks.
- It optimizes the loop by pre-binding seen.add instead of looking it up N times. (f7 also does this, but some versions don't.)
- It optimizes the loop by using ifilterfalse, so you only have to loop over the unique elements in Python, instead of all of them. (You still iterate over all of them inside ifilterfalse, of course, but that's in C and much faster.)

Is it actually faster than f7? It depends on your data, so you'll have to test it and see. If you want a list in the end, f7 uses a listcomp, and there's no way to do that here. (You can directly append instead of yielding, or you can feed the generator into the list function, but neither one can be as fast as the LIST_APPEND inside a listcomp.) At any rate, usually, squeezing out a few microseconds is not going to be as important as having an easily-understandable, reusable, already-written function that doesn't require DSU when you want to decorate.

As with all of the recipes, there are further variations of it as well.
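
The recipe being referred to is unique_everseen; a rough sketch of what it looks like in the Python 2 itertools documentation is shown here (Python 3 renames ifilterfalse to itertools.filterfalse):

from itertools import ifilterfalse

def unique_everseen(iterable, key=None):
    # List unique elements, preserving order. Remember all elements ever seen.
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in ifilterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element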

If you only want the no-key case, you can simplify it to:

import itertools  # Python 2; in Python 3, use itertools.filterfalse instead of ifilterfalse

def unique(iterable):
    seen = set()
    seen_add = seen.add  # pre-bind the bound method to avoid an attribute lookup per element
    for element in itertools.ifilterfalse(seen.__contains__, iterable):
        seen_add(element)
        yield element
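
A quick check of the simplified generator, assuming itertools is imported as above (under Python 3, substitute itertools.filterfalse):

list(unique([1, 5, 2, 1, 9, 1, 5, 10]))
# [1, 5, 2, 9, 10]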

Other answers

MizardX's answer gives a good collection of multiple approaches.

This is what I came up with while brainstorming to myself:

mylist = [x for i,x in enumerate(mylist) if x not in mylist[i+1:]]
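
Note that this comprehension re-scans the tail of the list for every element, so it is O(n^2), and because it tests mylist[i+1:] it keeps the last occurrence of each value rather than the first. A quick illustration with made-up data:

mylist = [1, 5, 2, 1, 9, 1, 5, 10]
[x for i, x in enumerate(mylist) if x not in mylist[i+1:]]
# [2, 9, 1, 5, 10]  (the order of last occurrences)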

Eliminate duplicate values in a sequence, but preserve the order of the remaining items. Uses a general-purpose generator function.

# for hashable sequence
def remove_duplicates(items):
    seen = set()
    for item in items:
        if item not in seen:
            yield item
            seen.add(item)

a = [1, 5, 2, 1, 9, 1, 5, 10]
list(remove_duplicates(a))
# [1, 5, 2, 9, 10]



# for unhashable sequence
def remove_duplicates(items, key=None):
    seen = set()
    for item in items:
        val = item if key is None else key(item)
        if val not in seen:
            yield item
            seen.add(val)

a = [ {'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 1, 'y': 2}, {'x': 2, 'y': 4}]
list(remove_duplicates(a, key=lambda d: (d['x'],d['y'])))
# [{'x': 1, 'y': 2}, {'x': 1, 'y': 3}, {'x': 2, 'y': 4}]

from itertools import groupby
[key for key, _ in groupby(sortedList)]

The list doesn't even have to be sorted; the sufficient condition is that equal values are grouped together.
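
For example (with illustrative input), groupby only collapses runs of equal adjacent values, so separated runs remain separate:

a = [1, 1, 2, 2, 3, 1, 1]
[key for key, _ in groupby(a)]
# [1, 2, 3, 1]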

Edit: I assumed that "preserving order" implies that the list is actually ordered. If that is not the case, then the solution from MizardX is the right one.

Community edit: However, this is the most elegant way to "compress duplicate consecutive elements into a single element".

You can reference a list comprehension as it is being built via the symbol '_[1]'. For example, the following function unique-ifies a list of elements without changing their order by referencing its own list comprehension.

def unique(my_list):
    # relies on the hidden '_[1]' name of the in-progress comprehension;
    # this is a CPython 2 implementation detail and does not work in Python 3
    return [x for x in my_list if x not in locals()['_[1]']]

Demo:

l1 = [1, 2, 3, 4, 1, 2, 3, 4, 5]
l2 = [x for x in l1 if x not in locals()['_[1]']]
print l2

Output:

[1, 2, 3, 4, 5]