What was the motivation for adding the IPV6_V6ONLY flag?

The IPV6_V6ONLY flag is used to ensure that a socket will only use IPv6; in particular, IPv4-mapped IPv6 addresses will not be used on that socket. The flag is not enabled by default on many OSs, but on some OSs (for example, Windows 7) it is enabled by default.

My question is: what was the motivation for introducing this flag? Was it something about IPv4-mapped IPv6 addresses causing problems, so that people needed a way to turn the mapping off? It seems to me that if someone does not want to use IPv4-mapped addresses, they could simply not bind to (or connect to) an IPv4-mapped IPv6 address. What am I missing here?
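
(For reference, the option in question is set per socket with setsockopt(); a minimal sketch, assuming a POSIX sockets API, with a helper name that is only for illustration:)

    #include <netinet/in.h>
    #include <sys/socket.h>

    int open_v6only_socket(void)
    {
        int s = socket(AF_INET6, SOCK_STREAM, 0);
        int on = 1;
        /* Restrict this socket to IPv6 traffic only; IPv4-mapped
         * addresses will not be used on it. */
        setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof(on));
        return s;
    }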

+22
networking ipv6
Apr 22 '10 at 19:11
7 answers

I do not know why this is the default, but it is the kind of flag I always set explicitly, no matter what the default happens to be.

As for why it exists in the first place, I assume it lets you keep existing IPv4-only servers running and start new ones on the same port that accept only IPv6 connections. Or perhaps the new server can simply proxy clients to the old one, which makes adding IPv6 support to old services simple and painless.

+3
Apr 22 '10 at 19:32

Not all IPv6-capable platforms support dual-stack sockets, so the question becomes: how does an application that wants to maximize IPv6 compatibility either know that dual-stack is supported, or bind separately when it is not? The only universal answer is IPV6_V6ONLY.

An application that ignores IPV6_V6ONLY, or that was written before dual-stack-capable IP stacks existed, can find that binding a separate IPv4 socket fails in a dual-stack environment, because the dual-stack IPv6 socket has already bound the IPv4 side of the port and blocks the IPv4 bind. An application may also be unable to handle IPv4 over the IPv6 socket because of addressing issues at the protocol or application level, or because of IP access controls.

These and similar situations most likely prompted MS et al. to default to 1, even though RFC 3493 declares 0 to be the default; 1 theoretically maximizes backward compatibility. In particular, Windows XP / 2003 does not support dual-stack sockets at all.

There is also no shortage of applications that, unfortunately, must pass lower-layer address information around to work properly, so this option can be very useful when planning an IPv4/IPv6 compatibility strategy that best fits the requirements and the existing code.
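
As a rough illustration of that idea (the helper name and fallback policy here are mine, not part of any standard API), an application can request the behavior it wants explicitly and treat a refusal as "no dual-stack here":

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Returns an AF_INET6 socket; sets *need_separate_v4 when the platform
     * refuses dual-stack, so the caller knows to bind an AF_INET socket too. */
    int open_ipv6_listener(int *need_separate_v4)
    {
        int s = socket(AF_INET6, SOCK_STREAM, 0);
        int off = 0;

        /* Ask for dual-stack explicitly instead of trusting the OS default. */
        if (s >= 0 && setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off)) == 0)
            *need_separate_v4 = 0;  /* dual-stack granted */
        else
            *need_separate_v4 = 1;  /* IPv6-only platform (or no IPv6 at all) */

        return s;
    }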

+10
May 10 '10 at 16:41

The reason most often mentioned is a server that has some form of ACL (Access Control List). For example, imagine a server with rules such as:

    Allow 192.0.2.4
    Deny all

It works fine on IPv4. Now someone runs it on a machine with IPv6 and, depending on various settings, IPv4 requests arrive on the IPv6 socket, appear as ::ffff:192.0.2.4, and no longer match the first ACL rule. Suddenly, access is denied.

Being explicit in your application (setting IPV6_V6ONLY one way or the other) solves the problem, regardless of what the operating system's default is.
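
To make the failure concrete, here is a hedged sketch (function and variable names are illustrative) of what the ACL code actually sees on a dual-stack socket: the IPv4 client arrives as an AF_INET6 peer, so a check written purely in IPv4 terms has to unwrap the mapped form or it will never match:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>

    /* 'peer' was filled in by accept() on a dual-stack AF_INET6 socket. */
    int peer_matches_ipv4_rule(const struct sockaddr_in6 *peer, const char *rule_ip)
    {
        struct in_addr rule, v4;

        if (inet_pton(AF_INET, rule_ip, &rule) != 1)
            return 0;

        /* An IPv4 client shows up as ::ffff:a.b.c.d, not as AF_INET. */
        if (!IN6_IS_ADDR_V4MAPPED(&peer->sin6_addr))
            return 0;

        /* The embedded IPv4 address is the last four bytes. */
        memcpy(&v4, &peer->sin6_addr.s6_addr[12], sizeof(v4));
        return v4.s_addr == rule.s_addr;
    }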

+4
May 09 '10 at 17:04

On Linux, when writing a service that listens on both an IPv4 and an IPv6 socket for the same service port, e.g. port 2001, you MUST call setsockopt(s, SOL_IPV6, IPV6_V6ONLY, &one, sizeof(one)); on the IPv6 socket. If you do not, the bind() for the IPv4 socket fails with "Address already in use".
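
A minimal sketch of that two-socket setup (error handling and listen() omitted; the function name is only for illustration):

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    void bind_both_families(unsigned short port)
    {
        int one = 1;

        /* IPv6 listener, restricted to IPv6 so it does not also claim
         * the IPv4 side of the port. */
        int s6 = socket(AF_INET6, SOCK_STREAM, 0);
        setsockopt(s6, SOL_IPV6, IPV6_V6ONLY, &one, sizeof(one));

        struct sockaddr_in6 a6;
        memset(&a6, 0, sizeof(a6));
        a6.sin6_family = AF_INET6;
        a6.sin6_addr = in6addr_any;
        a6.sin6_port = htons(port);
        bind(s6, (struct sockaddr *)&a6, sizeof(a6));

        /* A separate IPv4 listener on the same port now binds cleanly. */
        int s4 = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a4;
        memset(&a4, 0, sizeof(a4));
        a4.sin_family = AF_INET;
        a4.sin_addr.s_addr = htonl(INADDR_ANY);
        a4.sin_port = htons(port);
        bind(s4, (struct sockaddr *)&a4, sizeof(a4));
    }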

+2
Mar 28 '13 at 16:16

There are plausible ways in which the (poorly named) IPv4-mapped addresses can be used to bypass poorly configured systems or buggy stacks, and even on a well-configured system getting the checks right can take an excessive amount of work. A developer may want to use this flag to keep their application secure without having to deal with that part of the API.

See: http://ipv6samurais.com/ipv6samurais/openbsd-audit/draft-cmetz-v6ops-v4mapped-api-harmful-01.txt

+1
Apr 25 '12 at 0:50

Imagine a protocol that includes a network address in the conversation, e.g. the data transfer channel for FTP. When talking over IPv6 you are going to send an IPv6 address; if the recipient is actually an IPv4-only host, it will not be able to connect to that address.

0
May 02 '10 at 10:18

There is one very common example where the dual-stack behavior is a problem. The standard call getaddrinfo() with the AI_PASSIVE flag lets you pass a nodename parameter and returns a list of addresses to listen on. The special value NULL is accepted for nodename and implies listening on the wildcard addresses.

On some systems 0.0.0.0 and :: are returned, in that order. If dual-stack sockets are the default and you do not set IPV6_V6ONLY, the server binds to 0.0.0.0 and then cannot bind the dual-stack ::, and therefore (1) it works only over IPv4 and (2) it reports an error.

I think this order is wrong, since IPv6 is expected to be preferred. But even when the dual-stack :: is tried first and the IPv4-only 0.0.0.0 second, the server still gets an error on the second bind.

I personally consider the whole idea of a dual-stack socket a mistake. In my projects I would prefer to always set IPV6_V6ONLY explicitly to avoid this. Some people apparently do consider it a good idea, but in that case I would probably explicitly disable IPV6_V6ONLY and translate NULL directly into 0.0.0.0, bypassing the getaddrinfo() mechanism.
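
One common workaround when you do walk the getaddrinfo() list is to set IPV6_V6ONLY on every AF_INET6 result, so the :: and 0.0.0.0 binds no longer collide. A minimal sketch, assuming a POSIX sockets API (the function name is illustrative):

    #include <netdb.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    void bind_all_wildcards(const char *port)
    {
        struct addrinfo hints, *res, *ai;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* both 0.0.0.0 and :: */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;      /* NULL nodename -> wildcard addresses */

        if (getaddrinfo(NULL, port, &hints, &res) != 0)
            return;

        for (ai = res; ai != NULL; ai = ai->ai_next) {
            int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (s < 0)
                continue;
            if (ai->ai_family == AF_INET6) {
                /* Keep this listener IPv6-only so it does not clash with
                 * the separate 0.0.0.0 listener, whatever the OS default is. */
                int on = 1;
                setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof(on));
            }
            bind(s, ai->ai_addr, ai->ai_addrlen);
            listen(s, SOMAXCONN);
        }
        freeaddrinfo(res);
    }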

0
Oct 12 '15 at 10:25


