Say I have a type called T. Now suppose I make an array of that type T, which gives me T[]. In code, this looks like:
var myArray = new T[10];
With a length of 10. So this creates an array containing 10 elements of type T. This works fine if T is int, string, BinaryWriter, or whatever else. But now let T itself be an array type, for example int[] or string[].
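To make the plain case concrete, here is a minimal sketch (the variable names are just illustrative):

using System.IO;

var ints = new int[10];        // 10 ints, all initialized to 0
var strings = new string[10];  // 10 string references, all null
var writers = new BinaryWriter[10]; // 10 BinaryWriter references, all null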
Then, if we want to define an array of 10 elements of type T (here int[]), following the same pattern it should look something like this:
var myArray = new int[][10];
That is, simply replacing T with int[] in the first example. But this gives a syntax error, because the correct syntax in C# is:
var myArray = new int[10][];
Yet if we followed the logic of the first example, this syntax should declare an array of an undefined number of arrays, each containing 10 integers.
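To show what the compiler actually does with this syntax, here is a sketch; the inner length of 5 and the loop are just my illustration:

var myArray = new int[10][];   // 10 slots of type int[], each initially null
for (int i = 0; i < myArray.Length; i++)
{
    myArray[i] = new int[5];   // each inner array can have any length, e.g. 5
}

So the 10 binds to the outer array, not to the inner ones, even though it appears in the "inner" position of the type.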
The same applies to jagged arrays of any depth:
var myArray = new int[][][10];
This is wrong, because the syntactically correct way is:
var myArray = new int[10][][];
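Again as a sketch (the lengths 3 and 7 are arbitrary), what that declaration actually means is an array of 10 elements of type int[][]:

var myArray = new int[10][][]; // 10 slots of type int[][], each initially null
myArray[0] = new int[3][];     // first slot: an array of 3 int[] references
myArray[0][0] = new int[7];    // which we can fill in turn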
This is not a matter of personal preference or code style, but of pure logic: why is the syntax different when defining arrays of array types than when defining arrays of anything else?