Incorrect specialized function called in Swift 3 via an indirect call

I have code that follows this general structure:

protocol DispatchType {}
class DispatchType1: DispatchType {}
class DispatchType2: DispatchType {}

func doBar<D: DispatchType>(value: D) {
    print("general function called")
}

func doBar(value: DispatchType1) {
    print("DispatchType1 called")
}

func doBar(value: DispatchType2) {
    print("DispatchType2 called")
}

where DispatchType is actually a backend storage type. The doBar functions are optimized methods that depend on the correct storage type. Everything works fine if I do this:

let d1 = DispatchType1()
let d2 = DispatchType2()

doBar(value: d1) // "DispatchType1 called"
doBar(value: d2) // "DispatchType2 called"

However, if I create a function that calls doBar:

func test<D: DispatchType>(value: D) {
    doBar(value: value)
}

and I try a similar call pattern, I get:

test(value: d1) // "general function called"
test(value: d2) // "general function called"

This seems like something Swift should be able to handle, since it can determine the type constraints at compile time. As a quick test, I also tried writing doBar as:

func doBar<D: DispatchType>(value: D) where D: DispatchType1 {
    print("DispatchType1 called")
}

func doBar<D: DispatchType>(value: D) where D: DispatchType2 {
    print("DispatchType2 called")
}

but got the same results.

Any ideas whether this is the correct Swift behavior, and if so, is there a good way to work around it?

Edit 1. An example of why I'm trying to avoid using protocols. Suppose I have this code (greatly simplified from my actual code):

protocol Storage {
    // ...
}

class Tensor<S: Storage> {
    // ...
}

For the Tensor class, I have a basic set of operations that can be performed on Tensors. However, the operations themselves change their behavior based on the storage type. Currently I do this with:

func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> {
    // ...
}

Instead, I could put these in the Tensor class and use extensions:

extension Tensor where S: CBlasStorage {
    func dot(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}

but this has a few side effects that I don't like:

  • I think dot(lhs, rhs) is preferable to lhs.dot(rhs). Convenience functions can be written to get around this, but that creates a large explosion of code.

  • It causes the Tensor class to become monolithic. I would really prefer it to contain the minimum amount of code needed and to extend its functionality with helper functions.

  • Related to (2), it means that anyone who wants to add new functionality will have to touch the base class, which I consider bad design.
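As a sketch of the convenience-function workaround from point (1) — the minimal Storage, Tensor, and CBlasStorage types here are placeholders standing in for the real ones — a free function can simply forward to the method, but one such wrapper is needed per specialized extension:

```swift
protocol Storage {}
class CBlasStorage: Storage {}

class Tensor<S: Storage> {
    // Minimal stand-in for the real class.
}

extension Tensor where S: CBlasStorage {
    func dot(_ tensor: Tensor<S>) -> Tensor<S> {
        // A real implementation would call into CBLAS here.
        return tensor
    }
}

// Free-function wrapper restoring the dot(lhs, rhs) spelling.
// Each specialized extension needs its own wrapper, which is the
// code growth mentioned above.
func dot<S: CBlasStorage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> {
    return lhs.dot(rhs)
}
```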

Edit 2. One option that makes everything work is to use constraints everywhere:

func test<D: DispatchType>(value: D) where D: DispatchType1 {
    doBar(value: value)
}

func test<D: DispatchType>(value: D) where D: DispatchType2 {
    doBar(value: value)
}

This causes the correct doBar to be called. It's not ideal, as it requires a lot of additional code to be written, but at least it lets me keep my current design.

Edit 3. I came across documentation showing the use of static methods with generics. This helps, at least with point (1):

class Tensor<S: Storage> {
    // ...
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}

This allows you to write:

 let result = Tensor.cos(value) 

and supports operator overloading:

 let result = value1 + value2 

though it has the added verbosity of requiring the Tensor prefix. This can be improved a bit with:

 typealias T<S:Storage> = Tensor<S> 
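Putting Edit 3 together, a minimal sketch of how the alias shortens the call site — SomeStorage is a placeholder conformance, and the cos body is elided (a real implementation would apply cosine element-wise using storage-specific routines):

```swift
protocol Storage {}
class SomeStorage: Storage {}

class Tensor<S: Storage> {
    var values: [Double]
    init(_ values: [Double]) { self.values = values }

    // Placeholder body; a real implementation would compute the
    // element-wise cosine using the storage backend.
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        return Tensor(tensor.values)
    }
}

typealias T<S: Storage> = Tensor<S>

let value = Tensor<SomeStorage>([0.0, 1.0])
let result = T.cos(value) // same call as Tensor.cos(value), slightly shorter
```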
1 answer

This is indeed the correct behavior, since overload resolution is performed at compile time (it would be a rather expensive operation to perform at runtime). Therefore, from inside test(value:), the only thing the compiler knows about value is that it has some type conforming to DispatchType — thus the only overload it can dispatch to is func doBar<D: DispatchType>(value: D).

Things would be different if generic functions were always specialized by the compiler, because then the specialized implementation of test(value:) would know the concrete type of value and could therefore pick the appropriate overload. However, specialization of generic functions is currently only an optimization (since, without inlining, it can add significant bloat to your code), so it doesn't change the observed behavior.

One solution that allows for polymorphism is to leverage the protocol witness table (see this great WWDC talk on them): add doBar() as a protocol requirement, provide the specialized implementations in the conforming classes, and make the general implementation part of the protocol extension.

This allows doBar() to be dispatched dynamically, meaning it can be called from test(value:) and end up in the correct implementation.

protocol DispatchType {
    func doBar()
}

extension DispatchType {
    func doBar() {
        print("general function called")
    }
}

class DispatchType1: DispatchType {
    func doBar() {
        print("DispatchType1 called")
    }
}

class DispatchType2: DispatchType {
    func doBar() {
        print("DispatchType2 called")
    }
}

func test<D: DispatchType>(value: D) {
    value.doBar()
}

let d1 = DispatchType1()
let d2 = DispatchType2()

test(value: d1) // "DispatchType1 called"
test(value: d2) // "DispatchType2 called"
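Applied to the Tensor example from the question, the same witness-table pattern might look like the sketch below. This assumes a storage type can expose its own dot routine as a static requirement; CBlasStorage and the array-based API are placeholders, not the asker's actual code:

```swift
protocol Storage {
    // Each storage backend supplies its own optimized dot routine.
    static func dot(_ lhs: [Double], _ rhs: [Double]) -> Double
}

extension Storage {
    // General fallback implementation: a plain dot product.
    static func dot(_ lhs: [Double], _ rhs: [Double]) -> Double {
        return zip(lhs, rhs).reduce(0) { $0 + $1.0 * $1.1 }
    }
}

class CBlasStorage: Storage {
    // Specialized implementation; a real version would call into CBLAS.
    static func dot(_ lhs: [Double], _ rhs: [Double]) -> Double {
        return zip(lhs, rhs).reduce(0) { $0 + $1.0 * $1.1 }
    }
}

class Tensor<S: Storage> {
    var values: [Double]
    init(_ values: [Double]) { self.values = values }

    // Dispatches through S's protocol requirement, so new storage
    // types can add specialized behavior without touching Tensor.
    func dot(_ other: Tensor<S>) -> Double {
        return S.dot(values, other.values)
    }
}
```

Since the specialization lives on the storage type rather than on Tensor, this would also address point (3) from Edit 1: adding a new backend means conforming a new type to Storage, not modifying the base class.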

Source: https://habr.com/ru/post/1263669/

