You really can't do much beyond anticipating what your users are likely to do. You are in a good position to let the SQL Server optimizer do the hard work for you (assuming this is a data warehouse build).
I would create indexes on the columns most likely to be filtered or sorted on. Consider making these filtered indexes that exclude NULL values, which reduces the storage cost (assuming users will not filter on NULLs).
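As a sketch of the filtered-index idea (table and column names here are hypothetical, not from the question):

```sql
-- Filtered index: only rows with a non-NULL ShippedDate are indexed,
-- so the index stays small if most rows are unshipped.
CREATE NONCLUSTERED INDEX IX_Orders_ShippedDate
ON dbo.Orders (ShippedDate)
WHERE ShippedDate IS NOT NULL;
```

Queries whose WHERE clause matches the filter predicate can use this index; everything else falls back to other access paths.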
You can also precompute common joins and aggregations using indexed views. If you are willing to throw insane amounts of RAM at this problem and can tolerate slow writes, you can index and materialize the hell out of this database.
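A minimal indexed-view sketch, again with hypothetical schema names:

```sql
-- SCHEMABINDING is required before a view can be indexed.
CREATE VIEW dbo.vSalesByCustomer
WITH SCHEMABINDING
AS
SELECT CustomerID,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS RowCnt  -- COUNT_BIG(*) is mandatory in an indexed view with GROUP BY
FROM dbo.Sales
GROUP BY CustomerID;
GO
-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByCustomer
ON dbo.vSalesByCustomer (CustomerID);
```

After this, the aggregation is maintained incrementally on every write to dbo.Sales, which is exactly where the RAM/write-cost trade-off comes from.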
Finally, you can offload user queries to a read-only log-shipping target or the like. That will at least contain the impact of their terrible queries.
For your queries, you do want parameterization, but you do not necessarily want to cache the plans in all cases. If your queries are expensive (so that compilation time is insignificant by comparison), run them with OPTION (RECOMPILE) so that SQL Server can adapt the plan to the exact runtime values of all parameters.
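For example (hypothetical table and parameter, illustrating the hint only):

```sql
-- With OPTION (RECOMPILE), the optimizer sees the actual value of
-- @StartDate on every execution and can pick the best plan for it,
-- instead of reusing a plan compiled for some earlier value.
DECLARE @StartDate date = '2023-01-01';

SELECT CustomerID, SUM(Amount) AS Total
FROM dbo.Sales
WHERE SaleDate >= @StartDate
GROUP BY CustomerID
OPTION (RECOMPILE);
```

The trade-off is a compile on every run, which is why this only makes sense when execution cost dwarfs compilation cost.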
You should also log all queries and review them to look for patterns. Your users are likely to run very similar queries all the time; index for those.
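One way to spot those recurring patterns without building your own logging is the plan-cache DMVs. This sketch lists the most frequently executed cached statements:

```sql
-- Top 20 statements by execution count, with average CPU per run.
-- statement_start/end offsets are byte offsets into the batch text.
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_us,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;
```

Note the cache only reflects plans that are still resident, so sample it periodically rather than once.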
Run sp_updatestats regularly.
Finally, I want to say that there is no silver bullet here: if highly effective general solutions existed, SQL Server would implement them itself so that everyone could benefit.