Not all IPv6-capable platforms support dual-stack sockets, so the question becomes: how does an application that wants maximum IPv6 compatibility know whether dual-stack is supported, and bind separately when it is not? The only universal answer is IPV6_V6ONLY.
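As a rough illustration (POSIX C headers assumed; on Windows the option name is the same but the includes differ), an application can clear IPV6_V6ONLY itself and treat failure as "no dual-stack here, bind IPv4 separately":

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    /* Create an IPv6 TCP socket and explicitly request dual-stack
     * behaviour by clearing IPV6_V6ONLY.  On a platform without
     * dual-stack support the setsockopt() call fails, which tells the
     * application it must bind an IPv4 socket separately. */
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket(AF_INET6)");
        return 1;
    }

    int off = 0;
    if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off)) < 0)
        fprintf(stderr, "no dual-stack support: %s\n", strerror(errno));
    else
        printf("dual-stack socket: IPv4 peers appear as ::ffff:a.b.c.d\n");

    close(s);
    return 0;
}
```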
An application that ignores IPV6_V6ONLY, or that was written before dual-stack stacks existed, may find that binding the IPv4 address separately fails in a dual-stack environment, because the IPv6 dual-stack socket has already bound the IPv4 side and blocks the IPv4 bind. An application may also not expect to receive IPv4 traffic over an IPv6 socket, because of addressing concerns at the protocol or application level, or because of IP access controls.
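A minimal sketch of the portable "bind separately" approach, assuming POSIX sockets: force IPV6_V6ONLY to 1 on the IPv6 listener so it never claims the IPv4 address, then bind a plain AF_INET socket to the same port. The helper name listen_dual is made up for this example:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

/* Listen on one port with two independent sockets.  IPV6_V6ONLY=1
 * keeps the IPv6 socket off the IPv4 address, so the separate
 * AF_INET bind below cannot conflict with it on any platform. */
static int listen_dual(unsigned short port, int *v6_out, int *v4_out)
{
    int on = 1;

    int v6 = socket(AF_INET6, SOCK_STREAM, 0);
    if (v6 < 0)
        return -1;
    setsockopt(v6, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof(on));
    setsockopt(v6, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    struct sockaddr_in6 a6;
    memset(&a6, 0, sizeof(a6));
    a6.sin6_family = AF_INET6;
    a6.sin6_addr   = in6addr_any;
    a6.sin6_port   = htons(port);
    if (bind(v6, (struct sockaddr *)&a6, sizeof(a6)) < 0 || listen(v6, 16) < 0) {
        close(v6);
        return -1;
    }

    int v4 = socket(AF_INET, SOCK_STREAM, 0);
    if (v4 < 0) {
        close(v6);
        return -1;
    }
    setsockopt(v4, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    struct sockaddr_in a4;
    memset(&a4, 0, sizeof(a4));
    a4.sin_family      = AF_INET;
    a4.sin_addr.s_addr = htonl(INADDR_ANY);
    a4.sin_port        = htons(port);
    if (bind(v4, (struct sockaddr *)&a4, sizeof(a4)) < 0 || listen(v4, 16) < 0) {
        close(v6);
        close(v4);
        return -1;
    }

    *v6_out = v6;
    *v4_out = v4;
    return 0;
}
```

The cost is that the application then has to accept on two sockets (select/poll over both), but the behaviour is identical whether or not the platform supports dual-stack.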
This or similar situations most likely prompted MS et al. to default to 1, even though RFC 3493 declares 0 to be the default. A default of 1 theoretically maximizes backward compatibility. Notably, Windows XP / 2003 does not support dual-stack sockets at all.
There is also no shortage of applications that, unfortunately, need to pass lower-layer address information to operate properly, so this option can be very useful when planning an IPv4/IPv6 compatibility strategy that best fits the requirements and the existing code.
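When planning such a strategy, it can also help to simply read the platform's default on a throwaway socket rather than assume either value; a small sketch, again assuming POSIX headers:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    /* Read the default IPV6_V6ONLY value on a fresh socket.  RFC 3493
     * says 0, Windows defaults to 1, so code should not assume either. */
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket(AF_INET6)");
        return 1;
    }

    int v6only = -1;
    socklen_t len = sizeof(v6only);
    if (getsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &v6only, &len) == 0)
        printf("default IPV6_V6ONLY = %d\n", v6only);
    else
        perror("getsockopt(IPV6_V6ONLY)");

    close(s);
    return 0;
}
```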
Einstein May 10 '10 at 16:41