Python: list(set(some_list)) for duplicate removal

Is

list(set(some_list))

a good way to remove duplicates from a list? (Python 3.3, if that matters)

(Edited to address some comments ... perhaps this was too brief before).

In particular,

  • is it at least comparable in efficiency (mainly speed, but also memory) to rolling your own algorithm? It is clearly the most concise option.
  • is it reliable? In what situations does it break? (already mentioned ... list items must be hashable)
  • is there a more Pythonic way to do this?
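A quick sketch of both the happy path and the hashability caveat (the sample data is illustrative):

```python
# list(set(...)): concise, but the result order is arbitrary.
some_list = [3, 1, 2, 3, 1]
deduped = list(set(some_list))
assert sorted(deduped) == [1, 2, 3]

# It breaks on unhashable items, e.g. nested lists:
try:
    list(set([[1], [2], [1]]))
except TypeError:
    print("unhashable items cannot go into a set")
```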
+4
3 answers

Yes, it is a reliable and efficient way to do it; just note that a set does not preserve the original order, so the result comes back in arbitrary order. It is Pythonic.

If you need to preserve the order of first occurrence, use collections.OrderedDict instead of a set:

list(collections.OrderedDict((k, None) for k in some_list).keys())

If the items are not hashable but are sortable, you can use itertools.groupby instead:

list(k for k,g in itertools.groupby(sorted(some_list)))
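For example, the groupby version also handles unhashable but orderable items, at the cost of returning the result sorted rather than in the original order (the sample data is illustrative):

```python
import itertools

# Nested lists are unhashable, so set() would fail here,
# but they compare and sort fine.
some_list = [[2], [1], [2], [1], [3]]
unique = [k for k, g in itertools.groupby(sorted(some_list))]
assert unique == [[1], [2], [3]]  # sorted, duplicates removed
```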
+4

(Note that this returns the items sorted, not in their original order.)

It is Pythonic. If you are already using Numpy, there is also new_list = numpy.unique(some_list). But given the principle that "there should be one obvious way to do it", list(set(...)) remains the "Pythonic" choice.
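Worth knowing if you go the Numpy route: numpy.unique sorts the values and returns an ndarray, so convert back with tolist() if you need a plain list (a sketch, assuming NumPy is installed):

```python
import numpy as np

some_list = [3, 1, 2, 3, 1]
# numpy.unique returns a sorted ndarray of the distinct values.
new_list = np.unique(some_list).tolist()
assert new_list == [1, 2, 3]
```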

+3

Another option that preserves order (available since Python 2.7):

>>> from collections import OrderedDict
>>> list(OrderedDict.fromkeys('abracadabra'))
['a', 'b', 'r', 'c', 'd']

Unlike list(set(...)), this preserves the order of first occurrence.
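A quick check of that behaviour; as a side note, on Python 3.7+ plain dicts also preserve insertion order, so dict.fromkeys gives the same result without the import:

```python
from collections import OrderedDict

s = 'abracadabra'
assert list(OrderedDict.fromkeys(s)) == ['a', 'b', 'r', 'c', 'd']

# Python 3.7+: plain dicts remember insertion order too.
assert list(dict.fromkeys(s)) == ['a', 'b', 'r', 'c', 'd']
```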

+1

Source: https://habr.com/ru/post/1609745/

