If your use case requires that the elements of the collection be unique, use a data structure that enforces uniqueness for you. By doing so, you not only avoid writing an O(N) search to check for duplicates, but you can also bubble up the duplicate-key exception that such a collection already throws.
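For instance, the built-in collections signal duplicates in slightly different ways; a minimal sketch (assuming a modern .NET project with top-level statements):

using System;
using System.Collections.Generic;

var seen = new HashSet<string>();
Console.WriteLine(seen.Add("a")); // True: newly added
Console.WriteLine(seen.Add("a")); // False: already present (no exception)

var map = new Dictionary<string, int>();
map.Add("a", 1);
map.Add("a", 2); // throws ArgumentException: an element with the same key already exists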
However, .NET does not have a single collection that enforces uniqueness while also preserving insertion order, although it is fairly simple to extend List<T> to support this.
The approach I used below sacrifices memory for speed by storing the unique values in a second HashSet<T>. If memory were the bigger concern, you would instead do an O(N) check on every add. Since List<T>'s methods are (for some reason) not virtual, I resorted to hiding the base methods with the new keyword.
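For comparison, here is a minimal sketch of that memory-light variant: no second HashSet, at the cost of a linear Contains scan on every insert (the class name UniqueListSlow is hypothetical):

using System;
using System.Collections.Generic;

public class UniqueListSlow<T> : List<T>
{
    // O(N) duplicate check on every add; no extra memory beyond the list itself.
    public new void Add(T item)
    {
        if (Contains(item)) // linear search through the list
            throw new ArgumentException("Item already exists in UniqueList");
        base.Add(item);
    }
}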
Please note that the class below is just an example: it is not thread-safe and should probably not be used as-is in production code.
using System;
using System.Collections.Generic;

public class UniqueList<T> : List<T>
{
    // Shadow set used for O(1) duplicate detection.
    private readonly HashSet<T> _internalHash = new HashSet<T>();

    public UniqueList() : base() { }

    public UniqueList(int capacity) : base(capacity) { }

    // Route the elements through Add so the hash is populated
    // and duplicates in the input are rejected.
    public UniqueList(IEnumerable<T> collection)
    {
        AddRange(collection);
    }

    public new void Add(T item)
    {
        // HashSet<T>.Add returns false when the value is already present.
        if (!_internalHash.Add(item))
            throw new ArgumentException("Item already exists in UniqueList");
        base.Add(item);
    }

    public new void AddRange(IEnumerable<T> collection)
    {
        foreach (T item in collection)
            Add(item);
    }

    public new bool Remove(T item)
    {
        // If the item is not in the list it is not in the hash either,
        // so both calls are consistent no-ops in that case.
        _internalHash.Remove(item);
        return base.Remove(item);
    }

    public new int RemoveAll(Predicate<T> match)
    {
        // Walk backwards by index: removing inside a foreach over 'this'
        // would throw InvalidOperationException, and removals shift indices.
        int removed = 0;
        for (int i = Count - 1; i >= 0; i--)
        {
            if (match(this[i]))
            {
                RemoveAt(i);
                removed++;
            }
        }
        return removed;
    }

    public new void RemoveAt(int index)
    {
        Remove(this[index]);
    }

    public new void RemoveRange(int index, int count)
    {
        // Remove at the same position 'count' times; elements shift left
        // after each removal, so looping i from index to count would skip items.
        for (int i = 0; i < count; i++)
            RemoveAt(index);
    }
}
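A quick usage sketch:

var list = new UniqueList<int>();
list.Add(1);
list.Add(2);

try
{
    list.Add(1); // duplicate
}
catch (ArgumentException ex)
{
    Console.WriteLine(ex.Message); // "Item already exists in UniqueList"
}

Console.WriteLine(string.Join(", ", list)); // "1, 2" (insertion order preserved)

Keep in mind that because the methods are hidden rather than overridden, any code that reaches the instance through a List<T> or IList<T> reference calls the base implementations and silently bypasses the uniqueness check.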