The point of INCLUDE columns in a non-clustered index is to avoid bookmark (key) lookups into the clustered index. If SQL Server could in principle use a particular non-clustered index, but the optimizer estimates that doing so would require "too many" bookmark lookups, that index will simply be ignored. However, if every selected column is available directly from the index itself, no bookmark lookups are needed at all.
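As a minimal sketch of the idea (the table and column names here are hypothetical, not from your schema):

```sql
-- Hypothetical table dbo.Orders, for illustration only.
-- A query such as:
--   SELECT OrderDate, TotalDue FROM dbo.Orders WHERE CustomerID = 42;
-- can be answered entirely from this index, so no key lookup
-- into the clustered index is needed:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);
```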
In your case, the important fact is that you are already getting the data through a clustered index seek. It will be very difficult to improve on that. A covering non-clustered index that includes all the selected columns may be slightly faster, but only because its rows are narrower than the clustered index's. (And don't forget the cost of slower inserts and updates.)
That said, a couple of details are worth checking:
- If the clustered index has a composite key and the seek really only uses its leading column(s), you may be out of luck: the seek might narrow the data to, say, 500,000 rows and then filter them on the remaining criteria as residual predicates. In that case, experiment with some non-clustered indexes (see the sketch after this list).
- A clustered index seek can be great in isolation; but if it is executed 100,000 times in your query because some other part of the plan inefficiently returns too many rows, you won't gain much by making the seek itself faster.
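For the first point, here is a sketch under an assumed schema (table, columns, and index names are hypothetical):

```sql
-- Suppose the clustered primary key is the composite key (CustomerID, OrderID).
-- This query can seek only on the leading column CustomerID; the Status
-- filter is then applied as a residual predicate to every row of that customer:
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
  AND Status = 'Open';

-- A non-clustered index keyed on both filtered columns lets the seek itself
-- do the narrowing (OrderDate is included to avoid key lookups):
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Status
    ON dbo.Orders (CustomerID, Status)
    INCLUDE (OrderDate);
```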
Finally, to address the recent comment: cost is relative. Just because the clustered index seek accounts for 77% of your query's cost does not mean there is a problem. You can write a trivial single-table query that returns one row, and its clustered index seek will be 100% of the cost. (Being the only "work" in the plan, it is of course 100% of the work ... and 100% of instantaneous is still instantaneous.)
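For instance, a toy query along these lines (names hypothetical) will show its clustered index seek at 100% of the plan cost even though it finishes in a fraction of a millisecond:

```sql
-- The seek on the clustered primary key is the only operator in the plan,
-- so it necessarily shows as "100%" of the query cost:
SELECT Name
FROM dbo.Customers
WHERE CustomerID = 42;
```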
So: "Don't worry, be happy!"