Without knowing the internals, my guess is that this is because the first query only needs to read as far as the 50th record to return its results, while the second has to read all six million before returning anything. Basically, the first request simply finishes sooner.
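As a rough illustration of the early-stop behavior (a hypothetical sketch using SQLite, since the original DBMS and queries aren't shown), a `LIMIT` lets the engine stop scanning as soon as enough rows are produced, whereas an aggregate over the whole table must touch every row:

```python
import sqlite3

# Hypothetical table; names are illustrative, not from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 (("row",) for _ in range(100_000)))

# The engine can return after producing just 50 rows.
first_50 = conn.execute("SELECT * FROM t LIMIT 50").fetchall()

# Every row must be visited before the count is known.
total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

The difference in wall-clock time between the two statements grows with the table size, which matches the behavior described above.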
I would also suggest that this depends heavily on the field types, the table's keys, and so on.
If a record consists of fixed-length fields (for example, CHAR rather than VARCHAR), then the DBMS can simply calculate where the nth record begins and seek directly to it. If records are variable-length, it has to read through the preceding entries to determine where the nth one begins. Similarly, I would expect tables with suitable primary keys to answer such queries faster than tables without them.
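The fixed- versus variable-length distinction can be sketched with plain file I/O (a minimal illustration, not how any particular DBMS lays out its pages):

```python
import io
import struct

RECORD_SIZE = 8  # hypothetical fixed-length record: two 32-bit ints


def nth_fixed(f, n):
    # Fixed-length records: the offset is a single multiplication,
    # so we seek straight to record n without reading the others.
    f.seek(n * RECORD_SIZE)
    return struct.unpack("ii", f.read(RECORD_SIZE))


def nth_variable(f, n):
    # Variable-length records (length-prefixed here): we must walk
    # every preceding record just to learn where record n starts.
    f.seek(0)
    for _ in range(n):
        (length,) = struct.unpack("I", f.read(4))
        f.seek(length, io.SEEK_CUR)  # skip this record's payload
    (length,) = struct.unpack("I", f.read(4))
    return f.read(length)
```

The first function does O(1) work regardless of n; the second does O(n) reads, which is the cost the answer attributes to variable-length fields.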