Well, given the new information that this is a dense array of homogeneous numeric (double) values and that the queries matter (i.e., I will ignore de-normalizing into blobs/XML and special UDFs), I suggest the following:
Divide each result into several records, where each record has the form:
ID, SEGMENT, IDX_x, ...   // where x ranges over [0, q)
The value of q is arbitrary, but should be chosen with the specific database implementation in mind (for example, try to fit within the 8k record size in SQL Server) for performance/efficiency reasons.
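As a concrete (and purely illustrative) sketch of this layout, assuming SQL Server and a small q = 4 so the column list stays readable; in practice q would be much larger (a couple of hundred 8-byte FLOATs still fits comfortably in an 8k page). The table and column names are my own, not anything prescribed by the question:

```sql
-- Illustrative only: q = 4 values per record; a real q would be larger (e.g., 200).
CREATE TABLE chip_result (
    ID      INT   NOT NULL,  -- which result/chip this row belongs to
    SEGMENT INT   NOT NULL,  -- which block of q consecutive values this row holds
    IDX_0   FLOAT NOT NULL,
    IDX_1   FLOAT NOT NULL,
    IDX_2   FLOAT NOT NULL,
    IDX_3   FLOAT NOT NULL,
    PRIMARY KEY (ID, SEGMENT)
);
```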
Each result is split across records such that SEGMENT identifies which segment a given record holds. That is, the "absolute index" of a value is n = SEGMENT * q + x, and value n is found in the record where SEGMENT = n / q (integer division), in position x = n mod q. For example, with q = 200, value n = 1234 lives in the record with SEGMENT = 6, at x = 34. It follows that the Primary Key is (ID, SEGMENT).
Thus, queries stay simple; the only change is converting to/from the segment, and the only additional requirement is the SEGMENT column (which can also participate in an index).
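For example, fetching the value at absolute index n for one result might look like the sketch below (T-SQL, using the illustrative table above with q = 4). The CASE expression is just one way to pick the right column once the segment row is found; the same selection could equally be done client-side:

```sql
-- Look up the value at absolute index @n for result @id (illustrative, q = 4).
DECLARE @id INT = 42, @n INT = 1234, @q INT = 4;

SELECT CASE @n % @q              -- x = n mod q selects the column within the row
           WHEN 0 THEN IDX_0
           WHEN 1 THEN IDX_1
           WHEN 2 THEN IDX_2
           ELSE IDX_3
       END AS value_at_n
FROM chip_result
WHERE ID = @id
  AND SEGMENT = @n / @q;         -- integer division: SEGMENT = n / q
```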
(A separate table can be used to map values to their SEGMENT/x locations, or the mapping can be handled some other way. In that respect this resembles the EAV model.)
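If individual values are addressed by some external identifier rather than by position, that mapping could live in a small side table, for example (again, the names and types here are assumptions on my part):

```sql
-- Optional: map an externally known value/probe identifier to its location.
CREATE TABLE value_location (
    VALUE_KEY VARCHAR(64) NOT NULL PRIMARY KEY,
    SEGMENT   INT         NOT NULL,  -- row within a result
    X         INT         NOT NULL   -- which IDX_x column inside that row
);
```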
Thus, while this resembles the fully normalized form in some respects, it exploits the packed/homogeneous/static nature of the original matrix to significantly reduce the number of records: 2 million records is arguably a small table and 20 million only an "average" table, but 200 million records (200 chips x 1 million values per chip, if every value gets its own record) starts to become unwieldy. With a q of 200, by contrast, the record count drops from 200 million to just 1 million. (Each packed record is also far more efficient in terms of its data-to-structure ratio.)
Happy coding.
Although I've offered the "what if" suggestion above, for my part I would recommend studying the problem in more detail first, in particular the exact data patterns required. I'm not sure this is a "typical" use of a standard RDBMS, and an RDBMS may not even be a good way to approach the problem.