Why can the type_trait specialization lead to undefined behavior?

Discussion

According to the standard, §20.10.2/1, Header <type_traits> synopsis [meta.type.synop]:

The behavior of a program that adds specializations for any of the class templates defined in this subclause is undefined unless otherwise specified.

This sentence seems to contradict the general idea that the STL is meant to be extensible, and it forbids us from extending the type traits, as in the example below:

    #include <complex>
    #include <type_traits>

    namespace std
    {
        template <class T>
        struct is_floating_point<std::complex<T>>
            : std::integral_constant<
                  bool,
                  std::is_same<float, typename std::remove_cv<T>::type>::value ||
                  std::is_same<double, typename std::remove_cv<T>::type>::value ||
                  std::is_same<long double, typename std::remove_cv<T>::type>::value>
        {
        };
    }

Live demo

where std::is_floating_point is extended to handle complex numbers whose underlying type is a floating-point type.
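With that specialization in place, checks such as the following would pass (keeping in mind that, per the wording quoted above, the program formally has undefined behavior):

    static_assert(std::is_floating_point<std::complex<float>>::value,
                  "complex<float> is now classified as floating point");
    static_assert(!std::is_floating_point<std::complex<int>>::value,
                  "complex<int> still is not");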

Questions

  • What were the reasons for the standardization committee to decide that the type traits must not be specialized?
  • Are there any plans to lift this restriction in the future?
2 answers

For the primary type categories, of which is_floating_point is one, there is a design invariant:

For any given type T, exactly one of the primary type categories has a value member that evaluates to true.

Reference: 20.10.4.1 Primary type categories [meta.unary.cat]

Programmers can rely on this invariant in generic code when inspecting an unknown type T: i.e., if is_class<T>::value is true, there is no need to check is_floating_point<T>::value; it is guaranteed to be false.
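As a minimal sketch of code that relies on this guarantee (the classify helper below is hypothetical, not part of any library), a generic function can treat the primary categories as mutually exclusive branches:

    #include <string>
    #include <type_traits>

    // Hypothetical helper: because the primary categories are mutually
    // exclusive, the first branch that matches fully determines the answer;
    // no later branch can also be true for the same T.
    template <class T>
    std::string classify()
    {
        if (std::is_class<T>::value)          return "class type";
        if (std::is_floating_point<T>::value) return "floating-point type";
        if (std::is_integral<T>::value)       return "integral type";
        return "something else";
    }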

Here is a chart of the primary and composite type traits (the leaves at the top of the chart are the primary categories):

http://howardhinnant.github.io/TypeHiearchy.pdf

If (for example) std::complex<double> were allowed to answer true to both is_class and is_floating_point, this useful invariant would be broken. Programmers could no longer rely on the fact that if is_floating_point<T>::value == true, then T must be one of float, double, or long double.

That said, there are some traits for which the standard does "say otherwise" and which user-defined types are allowed to specialize. common_type<T, U> is one such trait.
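For illustration, here is a sketch of a permitted common_type specialization; the meters and kilometers types are hypothetical stand-ins for user-defined types:

    #include <type_traits>

    struct meters     { double value; };
    struct kilometers { double value; };

    // Legal: the standard explicitly allows user-defined specializations
    // of common_type, so this does not trigger undefined behavior.
    namespace std
    {
        template <>
        struct common_type<meters, kilometers>
        {
            using type = meters; // express mixed arithmetic in meters
        };
    }

    static_assert(std::is_same<std::common_type<meters, kilometers>::type,
                               meters>::value, "");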

For the primary and composite type categories, there are no plans to relax the restriction on specializing these traits. Doing so would compromise their ability to accurately and unambiguously classify every single type that can be written in C++.


Adding an example to Howard's answer:

If users were allowed to specialize the type traits, they could lie (deliberately or by mistake), and the standard library could no longer guarantee that its behavior is correct.

For example, when an object of type std::vector<T> is copied, an optimization used by popular implementations calls std::memcpy to copy all the elements in one go, provided that T is trivially copyable. The implementation can use std::is_trivially_copy_constructible<T> to determine whether that optimization is safe. If it is not, the implementation falls back to a safer but slower path that iterates over the elements and calls T's copy constructor.
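A minimal sketch of how such a dispatch might look internally (copy_elements is a hypothetical helper, not an actual libstdc++/libc++ function):

    #include <cstddef>
    #include <cstring>
    #include <new>
    #include <type_traits>

    // Fast path: the trait says T is trivially copy-constructible,
    // so a raw byte copy of the whole array is safe.
    template <class T>
    void copy_elements(T* dst, const T* src, std::size_t n, std::true_type)
    {
        std::memcpy(dst, src, n * sizeof(T));
    }

    // Safe path: construct each element with T's copy constructor.
    template <class T>
    void copy_elements(T* dst, const T* src, std::size_t n, std::false_type)
    {
        for (std::size_t i = 0; i < n; ++i)
            new (dst + i) T(src[i]);
    }

    template <class T>
    void copy_elements(T* dst, const T* src, std::size_t n)
    {
        // Tag dispatch on the trait picks the fast or the safe overload.
        copy_elements(dst, src, n,
                      typename std::is_trivially_copy_constructible<T>::type());
    }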

Now, if you specialize std::is_trivially_copy_constructible for T = std::shared_ptr<my_type> as follows:

    #include <memory>
    #include <type_traits>

    struct my_type { /* ... */ };

    namespace std
    {
        template <>
        struct is_trivially_copy_constructible<std::shared_ptr<my_type>>
            : std::true_type
        {
        };
    }

then copying a std::vector<std::shared_ptr<my_type>> will be disastrous: the shared_ptr objects are copied byte-by-byte, the reference counts are never incremented, and the pointed-to objects end up destroyed while still referenced.

This would be a bug not in the standard library implementation, but in the author of the specialization. To some extent, the wording quoted by the OP says: "it's your fault, not mine."

