Where I work, we build and distribute a library and a couple of complex programs built on that library. All of the code is written in C and runs on most "standard" systems such as Windows, Linux, AIX, Solaris and Darwin.
I recently started working in the QA department, and while running tests I have been reminded several times that I need to remember to raise the file descriptor limit and the default stack size, or bad things will happen. This is especially true on Solaris and now on Darwin.
This strikes me as odd, because I believe it should take zero environment tweaking to get the product working. So I am wondering whether there are cases where such a requirement is a necessary evil, or whether we are doing something wrong.
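For reference, here is a minimal sketch (not our actual code) of how a program could check at startup the limits it is sensitive to, using the POSIX getrlimit() call that exists on Linux, Solaris, AIX and Darwin (Windows works differently); the threshold values are made up for illustration:

```c
#include <stdio.h>
#include <sys/resource.h>

/* Warn if the soft limit for a resource is below what we think we need. */
static void warn_if_low(int resource, const char *name, rlim_t needed)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) != 0) {
        perror("getrlimit");
        return;
    }
    if (rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < needed)
        fprintf(stderr, "warning: %s soft limit is %lu, need at least %lu\n",
                name, (unsigned long)rl.rlim_cur, (unsigned long)needed);
}

int main(void)
{
    warn_if_low(RLIMIT_NOFILE, "file descriptor", 4096);  /* example value */
    warn_if_low(RLIMIT_STACK,  "stack size", 8UL << 20);  /* 8 MB, example */
    return 0;
}
```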
Edit:
Great comments that describe the problem and give a bit of background. However, I don't think I worded the question clearly enough. We currently require our customers, and therefore our testers, to set these limits before running our code; we do not do it programmatically. And this is not a situation where they merely might hit the limits: under normal load, our programs will hit them and fall over. So, rewording the question: is requiring the customer to change these ulimit values in order to run our software simply to be expected on some platforms (i.e. Solaris, AIX), or are we, as a company, making it too difficult for these users to get up and running?
Bounty: I added a bounty in the hope of getting a bit more information about what other companies are doing to manage these limits. Can you set them programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things are a bit sloppy under the covers? That is really what I want to know, because as a perfectionist, a seemingly sloppy program really bothers me.
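To make the "can you set them programmatically" part concrete, here is a hedged sketch of one option: raising the soft file-descriptor limit up to the hard limit with setrlimit() at process startup. Only a privileged process can raise the hard limit itself, so the customer (or an init script) may still need to raise that; the target of 4096 is just an example:

```c
#include <stdio.h>
#include <sys/resource.h>

/* Try to raise the soft RLIMIT_NOFILE to 'desired', clamped to the hard limit.
 * Returns 0 on success (or if the limit was already high enough). */
static int raise_fd_limit(rlim_t desired)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    if (rl.rlim_cur == RLIM_INFINITY || rl.rlim_cur >= desired)
        return 0;                      /* nothing to do */
    if (rl.rlim_max != RLIM_INFINITY && desired > rl.rlim_max)
        desired = rl.rlim_max;         /* unprivileged processes cannot exceed the hard limit */
    rl.rlim_cur = desired;
    return setrlimit(RLIMIT_NOFILE, &rl);
}

int main(void)
{
    if (raise_fd_limit(4096) != 0)     /* 4096 is an example target */
        perror("could not raise RLIMIT_NOFILE");
    return 0;
}
```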