Is there a way to speed up this operation if I have huge lists?
You can defer creating the results list until you know how many elements it needs:
    List<double?> list1 = new List<double?>();
    List<double?> list2 = new List<double?>();

    int recordCount = list1.Count > list2.Count ? list2.Count : list1.Count;
    List<double?> listResult = new List<double?>(recordCount);
This lets you specify the exact capacity needed for the results and avoid reallocations inside the list itself. For "huge lists," that is likely one of the slowest parts, since allocating a new backing array and copying the elements over each time the list grows is the most expensive operation here.
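For completeness, the filling loop itself would then look like this (a minimal sketch; it assumes list1 and list2 are already populated and uses recordCount and listResult from above):

    // With the capacity pre-set, no Add call triggers a resize.
    for (int i = 0; i < recordCount; i++)
    {
        listResult.Add(list1[i] + list2[i]);
    }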
In addition, if the computation itself is simple, you can use multiple cores:
    List<double?> list1 = new List<double?>();
    List<double?> list2 = new List<double?>();

    int recordCount = list1.Count > list2.Count ? list2.Count : list1.Count;
    var results = new double?[recordCount];

    // Each iteration writes a distinct index, so no locking is needed.
    Parallel.For(0, recordCount, index =>
    {
        results[index] = list1[index] + list2[index];
    });
Given that the "work" here is so trivial, you will probably need a custom partitioner to get the most out of the parallelism (see How to: Speed Up Small Loop Bodies). Note that Partitioner comes from System.Collections.Concurrent and Parallel from System.Threading.Tasks:
    var results = new double?[recordCount]; // Use an array here
    var rangePartitioner = Partitioner.Create(0, recordCount);

    Parallel.ForEach(rangePartitioner, range =>
    {
        for (int index = range.Item1; index < range.Item2; index++)
        {
            results[index] = list1[index] + list2[index];
        }
    });
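If the default ranges still give each task too little work, Partitioner.Create also has an overload that takes an explicit range size. A minimal sketch, assuming the same list1, list2, and recordCount as above; the chunk size of 10000 is a hypothetical starting point, not a recommendation, so measure before tuning:

    // Force coarser chunks so each task amortizes its dispatch overhead
    // over more additions. Tune the range size against real data.
    var coarsePartitioner = Partitioner.Create(0, recordCount, 10000);
    Parallel.ForEach(coarsePartitioner, range =>
    {
        for (int index = range.Item1; index < range.Item2; index++)
        {
            results[index] = list1[index] + list2[index];
        }
    });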
If this is not a bottleneck, you can do it with LINQ in a single line:
var results = list1.Zip(list2, (one, two) => one + two).ToList();
However, this will be (very slightly) less efficient than the explicit loops, so if performance really is a bottleneck, prefer the loop-based versions.
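If you want to verify that for your data sizes, a rough Stopwatch comparison is enough. A minimal sketch, assuming list1, list2, and recordCount from above; for stable numbers you would also add a warm-up pass and average several runs:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    // Time the pre-sized loop version.
    var sw = Stopwatch.StartNew();
    var loopResult = new List<double?>(recordCount);
    for (int i = 0; i < recordCount; i++)
    {
        loopResult.Add(list1[i] + list2[i]);
    }
    sw.Stop();
    Console.WriteLine($"Loop: {sw.ElapsedMilliseconds} ms");

    // Time the LINQ Zip version.
    sw.Restart();
    var zipResult = list1.Zip(list2, (one, two) => one + two).ToList();
    sw.Stop();
    Console.WriteLine($"Zip:  {sw.ElapsedMilliseconds} ms");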