I came across a peculiar difference between Solaris 10 sockets and Linux / other *NIX sockets. Example:
int temp1, rc;
temp1 = 16*1024*1024;
rc = setsockopt( sd, SOL_SOCKET, SO_RCVBUF, &temp1, sizeof(temp1) );
In the above code, rc == 0 on all systems (Linux, HP-UX and AIX) except Solaris 10. Those systems silently trim the requested value to the maximum allowed. Solaris 10 rightfully fails with errno == ENOBUFS, pointing out the configuration error.
After some discussion, it was decided that since this particular application is critical, it must not fail and should continue to work as gracefully as possible:
1. Issue a warning in the log file about the configuration inconsistency (easy: add a verification using getsockopt(); see the sketch after this list), and
2. Try setting the maximum possible buffer size (to get whatever performance is available).
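A minimal sketch of the verification in item #1, assuming fprintf() stands in for whatever logging the application actually uses (note that Linux reports back twice the value it actually set, so the comparison may need adjusting there):

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

/* Request a receive buffer size and warn if the kernel trimmed it. */
static void set_rcvbuf_checked(int sd, int requested)
{
    int actual = 0;
    socklen_t len = sizeof(actual);

    if (setsockopt(sd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested)) == -1)
        fprintf(stderr, "warning: setsockopt(SO_RCVBUF, %d) failed\n", requested);

    if (getsockopt(sd, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0 && actual < requested)
        fprintf(stderr, "warning: SO_RCVBUF trimmed from %d to %d\n", requested, actual);
}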
Item #2 is what I'm stuck with. On all systems other than Solaris I don't need to do anything: the sockets already do this for me.
But on Solaris I'm at a loss as to what to do. I implemented a trivial binary search around the condition (setsockopt(...) == -1 && errno == ENOBUFS) to find the maximum buffer size, but it looks ugly. (I have no context in which to cache the search result: the search has to be repeated for every connection with such a bad configuration. Global variables are problematic, because the code lives inside a shared library and is used from an MT application.)
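For reference, a sketch of that binary search; the bounds passed by the caller (e.g. 4 KB and 16 MB) are assumptions, not Solaris constants:

#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

/* Probe setsockopt(SO_RCVBUF) for the largest size the system accepts;
 * returns -1 if even the lower bound is rejected. */
static int probe_max_rcvbuf(int sd, int lo, int hi)
{
    int best = -1;

    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;

        if (setsockopt(sd, SOL_SOCKET, SO_RCVBUF, &mid, sizeof(mid)) == 0) {
            best = mid;        /* accepted: try a larger value */
            lo = mid + 1;
        } else if (errno == ENOBUFS) {
            hi = mid - 1;      /* too big: try a smaller value */
        } else {
            break;             /* unrelated failure: give up */
        }
    }
    return best;
}

Called, for example, as probe_max_rcvbuf(sd, 4*1024, 16*1024*1024). As noted above, the result cannot easily be cached, so this runs per connection.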
Is there a better way to determine the maximum allowed buffer size on Solaris 10 using the socket API?