Is there a technical reason for not implicitly converting DBNull to nullable types?

Is there a technical reason why there are no implicit conversions from DBNull to the various nullable and/or SQL types? I understand why the conversions don't currently happen, but I don't understand why the implicit conversion wasn't created at the time, or added in subsequent versions of the framework.

Just to be clear, I'm looking for technical reasons, not "because that's the way they did it" or "I like it that way."

+6
2 answers

Well, I don't know about the SqlTypes case, but there are certainly technical reasons why adding an implicit conversion between DBNull.Value and Nullable<T> values with HasValue == false wouldn't work.

Remember that DBNull is a reference type, while Nullable<T>, despite acting like a reference type by being able to pretend to be null, is actually a value type, with value semantics.

In particular, there is a strange corner case when values of type Nullable<T> are boxed. The runtime is specified to box a value of type Nullable<T> as a boxed T, not as a boxed Nullable<T>.

As the MSDN documentation explains it:

When a nullable type is boxed, the common language runtime automatically boxes the underlying value of the Nullable(Of T) object, not the Nullable(Of T) object itself. That is, if the HasValue property is true, the contents of the Value property are boxed. When the underlying value of a nullable type is unboxed, the common language runtime creates a new Nullable(Of T) structure initialized to the underlying value.

If the HasValue property of a nullable type is false, the result of the boxing operation is Nothing. Consequently, if a boxed nullable type is passed to a method that expects an object argument, that method must be prepared to handle the case where the argument is Nothing. When Nothing is unboxed into a nullable type, the common language runtime creates a new Nullable(Of T) structure and initializes its HasValue property to false.
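To see that boxing rule in action, here is a minimal sketch (the commented output is what the rules above predict):

    int? some = 42;
    int? none = null;

    object boxedSome = some;  // boxes the underlying int, not the Nullable<int>
    object boxedNone = none;  // HasValue == false, so the "box" is just a null reference

    Console.WriteLine(boxedSome.GetType());  // System.Int32
    Console.WriteLine(boxedNone == null);    // True

    int? roundTrip = (int?)boxedSome;  // unboxing recreates a Nullable<int> with Value == 42
    int? fromNull = (int?)boxedNone;   // null unboxes to HasValue == false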

Now we run into a thorny problem: the C# language specification (§4.3.2) says that we cannot use the unboxing conversion to convert DBNull.Value to Nullable<T>:

For an unboxing conversion to a given nullable type to succeed at run time, the value of the source operand must be either null or a reference to a boxed value of the underlying non-nullable value type of the nullable type. If the source operand is a reference to an incompatible object, a System.InvalidCastException is thrown.
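In other words, DBNull.Value falls squarely into the "incompatible object" case. A quick sketch of all three outcomes the spec describes:

    object a = null;          // unboxes to an int? with HasValue == false
    object b = 42;            // a boxed int: unboxes to an int? with Value == 42
    object c = DBNull.Value;  // a reference to an incompatible object

    int? x = (int?)a;  // OK
    int? y = (int?)b;  // OK
    int? z = (int?)c;  // throws System.InvalidCastException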

And we cannot use a user-defined conversion to convert from object to Nullable<T> either, per §10.10.3:

It is not possible to directly redefine a pre-defined conversion. Thus, conversion operators are not allowed to convert from or to object, because implicit and explicit conversions already exist between object and all other types.
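For illustration, a hypothetical wrapper type that tried to declare such an operator is rejected outright by the compiler:

    // Hypothetical type, for illustration only. The operator below does not
    // compile, because object is a base class of every type, and user-defined
    // conversions to or from a base class are not allowed (error CS0553).
    struct DbNullable<T> where T : struct
    {
        public static implicit operator DbNullable<T>(object value)  // error CS0553
        {
            throw new NotImplementedException();
        }
    }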

OK, so you or I couldn't do it ourselves, but Microsoft could just change the specification and make it legal, right? I don't think so.

Why? Well, imagine the intended use case: you have a method that is specified to return object. In practice, it returns either DBNull.Value or an int. But how could the compiler know that? All it knows is that the method is specified to return object, and the conversion operator to apply must be selected at compile time.
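A sketch of the dilemma (the method and its behavior are illustrative):

    // Specified to return object; in practice returns either DBNull.Value
    // or a boxed int:
    static object Method() { return DBNull.Value; }

    // The compiler sees only 'object'; it cannot choose a conversion based
    // on what the call will actually return at run time.
    // int? v = Method();     // does not compile: no implicit conversion exists
    int? w = (int?)Method();  // compiles, but throws when DBNull.Value comes back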

OK, so suppose there were some kind of magic operator that could convert from object to Nullable<T>, and the compiler had some way of knowing when it was applicable. (We don't want it used for every method that is specified to return object: what should it do when the method really returns a string?) But we still have a problem: the conversion can still go wrong! If the method returns either a long or DBNull.Value, and we write int? v = Method();, what should happen when the method returns a boxed long?
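Here is the boxed-long case concretely (again with an illustrative method):

    static object Method() { return 42L; }  // this time, a boxed long

    object o = Method();
    int? v = (int?)o;  // throws InvalidCastException: a boxed long unboxes only to long

    // An explicit conversion from long to int? does exist for unboxed values,
    // but unboxing requires the exact underlying type, so no single
    // compile-time choice of conversion covers both possible results.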

Basically, to make this work as intended, you would need the equivalent of dynamic: determine the type at run time and convert based on that run-time type. But then we have broken another rule: since the actual conversion would be selected only at run time, there is no guarantee it would actually succeed. And implicit conversions are not supposed to throw exceptions.
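dynamic makes exactly that trade-off visible: the conversion compiles, and the runtime binder may still reject it. A minimal sketch:

    dynamic d = DBNull.Value;
    int? v = (int?)d;  // compiles fine, but throws at run time with
                       // Microsoft.CSharp.RuntimeBinder.RuntimeBinderException,
                       // since no conversion exists from System.DBNull to int?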

So at this point it's not only a change to specified language behavior and a potentially significant performance hit, but on top of it all it can throw an unexpected exception! That seems like a pretty good reason not to implement it. But if you need one more reason, remember that every feature starts out minus 100 points.

In short: what you really want here can't be done with an implicit conversion anyway. If you want dynamic behavior, just use dynamic! It does what you want and is already implemented in C# 4.0:

    object x = 23;
    object y = null;

    dynamic dx = x;
    dynamic dy = y;

    int? nx = (int?)dx;
    int? ny = (int?)dy;

    Console.WriteLine("nx.HasValue = {0}; nx.Value = {1}; ny.HasValue = {2};",
        nx.HasValue, nx.Value, ny.HasValue);
+3

The simplest explanation comes from the Microsoft documentation:

CLR nullable types are not intended for storage of database nulls, because an ANSI SQL null does not behave the same way as a null reference (or Nothing in Visual Basic).
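The difference is easy to demonstrate: CLR nulls compare equal to each other, while ANSI SQL null semantics, as modeled by System.Data.SqlTypes, treat any comparison involving null as unknown. A small sketch:

    using System;
    using System.Data.SqlTypes;

    int? a = null, b = null;
    Console.WriteLine(a == b);  // True: two CLR nulls are equal

    SqlInt32 x = SqlInt32.Null, y = SqlInt32.Null;
    Console.WriteLine((x == y).IsNull);  // True: SQL NULL == NULL is unknown, not true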

+7

Source: https://habr.com/ru/post/905580/

