How to infer coercions?

I would like to know how to infer coercions (implicit conversions) during type inference. I use the type inference scheme described in Bastiaan Heeren's Top Quality Type Error Messages, but I would suggest that the general idea is probably present in all Hindley-Milner-esque approaches.

It seems that coercion can be considered as a form of overloading, but the overloading approach described in that paper does not (at least not in any way I could see) account for overloading driven by the requirements the context imposes via the return type, which is a prerequisite for coercion. I am also concerned that such an approach may make it difficult to prioritize coercions, as well as to respect the transitive closure of coercibility. I can see how desugaring each coercible expression e to coerce(e) would work, but desugaring it to coerce(coerce(coerce(... coerce(e) ...))) for some depth equal to the maximum nesting of coercions seems silly, and it also limits the coercibility relation to one with a finite transitive closure whose depth does not depend on the context, which seems (unnecessarily?) restrictive.
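For concreteness, the overloading reading of coercion can be sketched in Haskell with a two-parameter type class (the class name Coerce and its instances are my own illustration, not taken from Heeren's paper); the type expected by the context drives instance selection, and the fixed-depth problem shows up as soon as a chain of coercions needs nesting:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- 'Coerce a b' says a value of type a may be used where b is expected.
class Coerce a b where
  coerce :: a -> b

-- The identity coercion: every type coerces to itself.
instance Coerce a a where
  coerce = id

-- A primitive coercion, e.g. Int to Double.
instance Coerce Int Double where
  coerce = fromIntegral

-- Desugaring wraps each expression e as (coerce e); the type expected
-- by the context then selects an instance.  A transitive chain such as
-- Int -> Double -> SomeOtherType is not covered unless we either add a
-- direct instance or nest applications, coerce (coerce e), which is
-- exactly the fixed-depth desugaring objected to above.
```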

+4
3 answers

Hope you get good answers to that.

I have not read the paper you are referencing yet, but it sounds interesting. Have you seen how ad-hoc polymorphism (mostly overloading) works in Haskell? A type system like Haskell's is HM plus some other goodies. One of those goodies is type classes. Type classes provide overloading or, as Haskellers call it, ad-hoc polymorphism.

In GHC, the most widely used Haskell compiler, type classes are implemented by passing dictionaries around at runtime. The dictionary lets the runtime system look up the implementation for a given type. Supposedly jhc can use super-optimization to choose the right implementation at compile time, but I am skeptical about the fully polymorphic cases that Haskell allows, and I don't know of any formal proof or paper showing it is correct.
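As a rough illustration of the dictionary translation (the names here are illustrative, not GHC's actual internals): a class becomes a record of its methods, and every overloaded function takes that record as an extra argument:

```haskell
-- A class becomes a record of its methods.
data EqDict a = EqDict { eq :: a -> a -> Bool }

-- The Eq Int instance becomes a concrete dictionary value.
eqIntDict :: EqDict Int
eqIntDict = EqDict { eq = (==) }

-- elem :: Eq a => a -> [a] -> Bool is compiled to a function that
-- receives the dictionary explicitly and looks up 'eq' in it.
elemD :: EqDict a -> a -> [a] -> Bool
elemD d x = any (eq d x)
```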

It looks like your type inference will run into the same problems as other rank-n polymorphic approaches. You might want to read some of the papers there for additional background: scroll down to the type papers. His papers have a particular slant, but the type-theoretic material should be meaningful and useful to you.

I think this page on rank-n polymorphism and the problems of type checking it should trigger some interesting thoughts for you: http://research.microsoft.com/~simonpj/papers/higher-rank/

I would like to give a better answer! Good luck.

+3

My experience is that sugaring each term seems intuitively unattractive, but it is worth pursuing.

An interest in persistent storage led me to look into the problems of mixing expressions and atomic values. To support this, I decided to separate them completely in the type system; this way int, char, etc. are type constructors for integer and character values only. Expression types are formed with the polymorphic type constructor Exp; for example, Exp Int denotes a value that reduces in one step to an Int.

The relevance to this question arises when we consider evaluation. At the basic level there are primitives that require atomic values: COND, addInt, etc. Some people refer to this as forcing an expression; I prefer to see it simply as a cast between values of different types.

The challenge is to check whether this can be done without requiring explicit reduction directives. One solution is exactly as you suggest: treat coercion as a form of overloading.

Say we have the source input: foo x

Then after sugaring it becomes: (coerce foo) (coerce x)

Where, informally:

 coerce :: a -> b
 coerce x = REDUCE (cast x)   if a and b are incompatible
            x                 otherwise

Thus, coerce is either the identity or an application of cast, where b is the return type required by the given context.

Now, cast can be treated as a type class method, for example:

 class Cast a, b where { cast :: a -> b };
 -- ¬:: is an operator, literally meaning: don't cast
 -- (!) is the reduction operator; performs one step of reduction
 -- reduce on demand
 instance Cast Exp c, c where { inline cast = ¬::(!)(\x::(Exp c) -> ¬::(!)x) };

The ¬:: annotations are used to suppress the syntactic sugaring with coerce.

Presumably other Cast instances could be introduced to widen the range of conversions, although I have not investigated this. As you say, overlapping instances seem to be necessary.
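For comparison, here is a rough Haskell approximation of the Cast class above (my own sketch, not the author's language: Exp is modeled as a thunk, and the (!) reduction operator as a function reduce):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Exp a stands for a term that reduces in one step to an a;
-- a thunk is the simplest Haskell stand-in for that.
newtype Exp a = Exp (() -> a)

-- reduce plays the role of the (!) one-step reduction operator.
reduce :: Exp a -> a
reduce (Exp f) = f ()

class Cast a b where
  cast :: a -> b

-- The counterpart of 'instance Cast Exp c, c' above: casting an
-- unreduced term to its value type performs the reduction on demand.
instance Cast (Exp c) c where
  cast = reduce
```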

+1

Could you give a little more clarification as to what exactly you are asking?

I have a hunch, and if my hunch is correct, then this should be sufficient as my answer. I believe you are asking this from the point of view of someone creating a language, in which case you could look at a language such as ActionScript 3, for example. In AS3 you can cast two different ways: 1) NewType(object), or 2) object as NewType.

From an implementation point of view, I would have each class define its own conversion methods depending on what types it can convert to (an array cannot really convert to an integer... or can it?). For example, if you try Integer(myArrayObject) and myArrayObject does not define how to convert to Integer, you can either throw an exception or let it be and just pass along the original object, unconverted.
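The per-class conversion idea can be sketched in Haskell as well (the class and method names here are hypothetical), with the caller choosing between the throw-on-failure and pass-through behaviours described above:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Each type declares how (and whether) it converts to Int.
class ToInt a where
  toInt :: a -> Maybe Int

instance ToInt Int where
  toInt = Just

-- An array does not really convert to an integer.
instance ToInt [Int] where
  toInt _ = Nothing

-- The throw-on-failure variant, like Integer(myArrayObject) raising
-- an error in the AS3-style design described above.
toIntOrError :: ToInt a => a -> Int
toIntOrError x = maybe (error "no conversion to Int") id (toInt x)
```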

My whole answer may be completely off :-D Let me know if this is not what you are looking for.

0

Source: https://habr.com/ru/post/1276730/
