F# supports "slice expressions": for example, for an ordinary one-dimensional array myArray, an expression such as myArray.[3 .. 5] is allowed. According to the F# 4.0 language specification (section 6.4.7), such an expression is implemented by a call to the GetSlice method after the appropriate argument conversion. This also works for multidimensional arrays. However, I am having trouble defining an interface that supports this in the two-dimensional case.
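For reference, here is the built-in slicing behavior I am describing, on ordinary arrays (the values are just sample data):

```fsharp
// One-dimensional slicing on a plain array.
let myArray = [| 0; 10; 20; 30; 40; 50 |]
let slice = myArray.[3 .. 5]          // [| 30; 40; 50 |]

// The same syntax works on multidimensional arrays.
let m = Array2D.init 3 4 (fun i j -> i * 10 + j)
let sub = m.[1 .. 2, 0 .. 1]          // rows 1-2, columns 0-1
```

This is exactly the behavior I would like my own interface to support.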
Here is what I did. I defined the interface as follows:
type IMatrix =
    abstract member GetSlice : ?start1:int * ?end1:int * ?start2:int * ?end2:int -> IMatrix
    abstract member GetSlice : idx1:int * ?start2:int * ?end2:int -> IMatrix
    abstract member GetSlice : ?start1:int * ?end1:int * idx2:int -> IMatrix
This follows the rules of section 6.4.7 of the specification as I understand them. The idea is that when I then have an IMatrix value named matrix, I should be able to write
matrix.[1 .. 2, 3 .. 4]
and get back a submatrix of type IMatrix. Essentially, the compiler should convert 1 .. 2 to Some 1, Some 2 and 3 .. 4 to Some 3, Some 4, and then pass these four option values to the four-parameter GetSlice overload.
However, when I try this in practice, the compiler reports that none of the overloads of the GetSlice method matches, in particular that the type 'int' is not compatible with the type 'int option'. It seems that the compiler correctly recognizes that the slice expression should be translated into a GetSlice call, but for some reason the arguments get mixed up.
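A stripped-down version of the failing call looks like this (doSlice is just a throwaway function for this question, and I have kept only the four-parameter overload):

```fsharp
type IMatrix =
    abstract member GetSlice : ?start1:int * ?end1:int * ?start2:int * ?end2:int -> IMatrix

// I expect the slice below to translate to
// matrix.GetSlice(Some 1, Some 2, Some 3, Some 4),
// but instead the compiler rejects it with
// "The type 'int' is not compatible with the type 'int option'".
let doSlice (matrix: IMatrix) =
    matrix.[1 .. 2, 3 .. 4]
```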
As an aside, I run into the same problem with one-dimensional slicing, for example with an analogous IVector interface, and also when I put the GetSlice members in a class instead of an interface.
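The one-dimensional case that fails for me in the same way looks like this (IVector is my own interface, analogous to IMatrix above):

```fsharp
type IVector =
    abstract member GetSlice : ?start1:int * ?end1:int -> IVector

// I expect v.[1 .. 2] to become v.GetSlice(Some 1, Some 2),
// but it is rejected with the same 'int' vs 'int option' error.
let slice1D (v: IVector) =
    v.[1 .. 2]
```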
How can I fix this?