I used the source of this page as an example, then duplicated the content 8 times, resulting in a page of 334,312 bytes. Using StringComparison.Ordinal makes a huge difference in performance.
    string newInput = string.Format("{0}{0}{0}{0}{0}{0}{0}{0}", input.Trim().ToLower());
    //string newInput = input.Trim().ToLower();

    System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
    sw.Start();

    // Record the start/stop offsets of every <script>...</script> block.
    int[] _exclStart = new int[100];
    int[] _exclStop = new int[100];
    int _excl = 0;

    for (int f = newInput.IndexOf("<script", 0, StringComparison.Ordinal); f != -1; )
    {
        _exclStart[_excl] = f;

        f = newInput.IndexOf("</script", f + 8, StringComparison.Ordinal);
        if (f == -1)
        {
            // Unterminated script block: exclude everything to the end of the page.
            _exclStop[_excl] = newInput.Length;
            break;
        }
        _exclStop[_excl] = f;

        f = newInput.IndexOf("<script", f + 8, StringComparison.Ordinal);
        ++_excl;
    }

    sw.Stop();
    Console.WriteLine(sw.Elapsed.TotalMilliseconds);
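The only change from the original is the comparison argument. As a minimal sketch of the difference (the culture-sensitive overload is my assumption about what the original code called):

    // Culture-sensitive search: applies the current culture's collation
    // rules at every candidate position, which is expensive on large input.
    int slow = newInput.IndexOf("<script", 0);

    // Ordinal search: compares raw UTF-16 code units with no culture rules.
    int fast = newInput.IndexOf("<script", 0, StringComparison.Ordinal);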
Run 5 times, this gives almost the same result on each pass (the timings do not change significantly between runs, so for this simple code almost no time is spent on JIT compilation).
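A minimal sketch of that 5-run measurement; ScanForScriptTags is a hypothetical method wrapping the scan loop above (minus its Stopwatch), not a name from the original code:

    for (int run = 0; run < 5; run++)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        ScanForScriptTags(newInput); // hypothetical wrapper around the scan loop above
        sw.Stop();
        Console.WriteLine(sw.Elapsed.TotalMilliseconds);
    }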
Output using the original code (in milliseconds):
10.2786 11.4671 11.1066 10.6537 10.0723
Output using the code above (in milliseconds):
0.3055 0.2953 0.2972 0.3112 0.3347
Please note that my test results are approximately 0.010 seconds (original code) and 0.0003 seconds (ordinal code). This means that you have something else wrong besides this code.
If, as you say, using StringComparison.Ordinal does nothing for your performance, then either you are using the wrong timer to measure it, or you have a large overhead when reading the input value, for example reading it from a stream, that you are not accounting for.
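To illustrate the timer point (DateTime.Now is only my assumption about what a "wrong timer" might look like): on Windows, DateTime.Now typically advances in steps of roughly 15 ms, so it cannot resolve the sub-millisecond times measured above, whereas Stopwatch uses the high-resolution performance counter.

    DateTime t0 = DateTime.Now;
    // ... code under test ...
    double coarse = (DateTime.Now - t0).TotalMilliseconds;  // ~15 ms granularity

    var sw = System.Diagnostics.Stopwatch.StartNew();
    // ... code under test ...
    sw.Stop();
    double precise = sw.Elapsed.TotalMilliseconds;          // high-resolution counter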
Tested under Windows 7 x64, running on a 3 GHz i5, using the .NET Client Profile.
Suggestions:
- Use StringComparison.Ordinal.
- Make sure you use System.Diagnostics.Stopwatch to measure performance.
- Declare a local variable for the input instead of repeatedly using a value external to the function (for example: string newInput = input.Trim().ToLower();); see the sketch after this list.
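A minimal sketch combining all three suggestions (the method name and signature are illustrative, not taken from the original code):

    // Illustrative only: times one ordinal scan of a local copy of the
    // input and returns the elapsed time in milliseconds.
    static double MeasureScan(string input)
    {
        // Suggestion 3: work on a local variable, not an external value.
        string newInput = input.Trim().ToLower();

        // Suggestion 2: time with Stopwatch, not a wall-clock timer.
        var sw = System.Diagnostics.Stopwatch.StartNew();

        // Suggestion 1: ordinal search, as in the scan loop above.
        for (int f = newInput.IndexOf("<script", 0, StringComparison.Ordinal);
             f != -1;
             f = newInput.IndexOf("<script", f + 1, StringComparison.Ordinal))
        {
            // ... record block boundaries, pairing with </script as shown above ...
        }

        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }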
Again, I emphasize that I am getting a result roughly 50 times faster on test data that is apparently about 4 times larger, using the same code that you provided. That means my test runs about 200 times faster than yours, which is not what anyone would expect if we were both working in the same environment, with the only difference being an i5 (me) versus an i7 (you).