I'm not sure what exactly you mean by "It would be better to have something that scales with the size of the array." You certainly cannot add additional columns to the result set as the inner array grows, because PostgreSQL needs to know the exact set of result columns before executing the query (that is, before it starts reading the data).
But I would like to suggest converting the array into an ordinary relational representation of the matrix:
select i, j, arr[i][j] a_i_j
from (
  select i, generate_subscripts(arr,2) as j, arr
  from (
    select generate_subscripts(arr,1) as i, arr
    from (select ('{{1,"a",11},{2,"b",22},{3,"c",33},{4,"d",44}}'::text[][]) arr) input
  ) sub_i
) sub_j
Which gives:
 i | j | a_i_j
---+---+-------
 1 | 1 | 1
 1 | 2 | a
 1 | 3 | 11
 2 | 1 | 2
 2 | 2 | b
 2 | 3 | 22
 3 | 1 | 3
 3 | 2 | c
 3 | 3 | 33
 4 | 1 | 4
 4 | 2 | d
 4 | 3 | 44
Such a result can be quite useful in further data processing, I think.
Of course, such a query can only handle an array with a predetermined number of dimensions, but the array's size along each dimension can change without rewriting the query, so this is a slightly more flexible approach.
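As a minimal sketch of that flexibility (the table and column names matrices, id and arr are my own invention here), the same fixed-dimension unnesting works unchanged for rows whose arrays have different sizes along each dimension:

create table matrices (id int primary key, arr text[][]);
insert into matrices values
  (1, '{{1,"a"},{2,"b"}}'),               -- a 2 x 2 matrix
  (2, '{{10,"x",100},{20,"y",200}}');      -- a 2 x 3 matrix, same query still works

select m.id, i, j, m.arr[i][j] as a_i_j
from matrices m
cross join lateral generate_subscripts(m.arr, 1) as i
cross join lateral generate_subscripts(m.arr, 2) as j
order by m.id, i, j;

Note that generate_subscripts simply returns no rows for a dimension the array does not have, so such rows drop out instead of raising an error.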
ADDED: Yes, using with recursive you can build a similar query that handles an array with an arbitrary number of dimensions. However, there is no way around the limitation imposed by the relational data model: the exact set of columns must be determined at query parse time and cannot be deferred until execution time. So we are forced to store all the indexes in a single column, using another array.
Here is a query that retrieves all elements of an arbitrary multidimensional array together with their zero-based indexes (stored in another, one-dimensional array):
with recursive extract_index(k,idx,elem,arr,n) as (
  select (row_number() over())-1 k, idx, elem, arr, n
  from (
    select array[]::bigint[] idx, unnest(arr) elem, arr, array_ndims(arr) n
    from (
      select '{{{1,"a"},{11,111}},{{2,"b"},{22,222}},{{3,"c"},{33,333}},{{4,"d"},{44,444}}}'::text[] arr
    ) input
  ) plain_indexed
  union all
  select k/array_length(arr,n)::bigint k,
         array_prepend(k%array_length(arr,n),idx) idx,
         elem, arr, n-1 n
  from extract_index
  where n!=1
)
select array_prepend(k,idx) idx, elem
from extract_index
where n=1
Which gives:
   idx   | elem
---------+------
 {0,0,0} | 1
 {0,0,1} | a
 {0,1,0} | 11
 {0,1,1} | 111
 {1,0,0} | 2
 {1,0,1} | b
 {1,1,0} | 22
 {1,1,1} | 222
 {2,0,0} | 3
 {2,0,1} | c
 {2,1,0} | 33
 {2,1,1} | 333
 {3,0,0} | 4
 {3,0,1} | d
 {3,1,0} | 44
 {3,1,1} | 444
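As a quick sanity check of the index convention (the idx values above are zero-based, while PostgreSQL array subscripts are 1-based by default), the element at idx = {1,0,1} should be reachable by adding 1 to each index:

select ('{{{1,"a"},{11,111}},{{2,"b"},{22,222}},{{3,"c"},{33,333}},{{4,"d"},{44,444}}}'::text[])[2][1][2];
-- returns "b", matching the {1,0,1} row above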
Formally, this seems to prove the concept, but I wonder what real practical use could be made of it :)
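For what it's worth, here is one candidate use, just as a sketch built on the query above: filtering the final select by elem turns it into a search that returns the index path of a value in an array of any dimensionality.

with recursive extract_index(k,idx,elem,arr,n) as (
  select (row_number() over())-1 k, idx, elem, arr, n
  from (
    select array[]::bigint[] idx, unnest(arr) elem, arr, array_ndims(arr) n
    from (
      select '{{{1,"a"},{11,111}},{{2,"b"},{22,222}},{{3,"c"},{33,333}},{{4,"d"},{44,444}}}'::text[] arr
    ) input
  ) plain_indexed
  union all
  select k/array_length(arr,n)::bigint k,
         array_prepend(k%array_length(arr,n),idx) idx,
         elem, arr, n-1 n
  from extract_index
  where n!=1
)
select array_prepend(k,idx) idx, elem
from extract_index
where n=1
  and elem = 'b';   -- the value being searched for
-- returns: {1,0,1} | b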