Those last 3 bytes were just the straw that broke the camel's back; most likely, the attempt to allocate one more long row of results is what triggers the error.
Unfortunately, libpq buffers the entire result set in memory before returning control to your application. That is on top of whatever memory you already use for $myArray.
It was suggested to use LIMIT ... OFFSET ... to reduce the memory footprint; that works, but it is inefficient, because it can needlessly repeat the server-side sorting work every time the query is reissued with a different offset (for example, to answer LIMIT 10 OFFSET 10000, Postgres still has to sort the entire result set, only to return rows 10000..10010).
Instead, use DECLARE ... CURSOR to create a server-side cursor, then FETCH FORWARD x to retrieve the next x rows. Repeat as many times as necessary, or until fewer than x rows are returned. Remember to CLOSE the cursor when you are done, even if/when an exception is raised.
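As a rough sketch of the server-side flow (the cursor name, table, and batch size here are made up for illustration):

```sql
BEGIN;  -- cursors only live inside a transaction

DECLARE my_cur CURSOR FOR
    SELECT id, name FROM big_table ORDER BY id;

FETCH FORWARD 1000 FROM my_cur;  -- first batch of up to 1000 rows
FETCH FORWARD 1000 FROM my_cur;  -- next batch; repeat until fewer than 1000 rows come back

CLOSE my_cur;
COMMIT;
```

Each FETCH only transfers (and libpq only buffers) that one batch, instead of the whole result set.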
Also, don't SELECT * ; if you only need id and name , declare the cursor FOR SELECT id, name (otherwise libpq will uselessly fetch and buffer columns you never use, increasing both memory usage and total query time).
With cursors as shown above, libpq holds no more than x rows in memory at any given time. However, make sure you also clear $myArray between FETCHes if possible, or you can still run out of memory because of $myArray.
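Putting it together on the PHP side, a minimal sketch using the pgsql extension might look like this (the connection string, cursor name, table, and batch size of 1000 are assumptions, not from the original question):

```php
<?php
// Assumed connection details; adjust for your environment.
$conn = pg_connect("dbname=mydb");

pg_query($conn, "BEGIN"); // a cursor must live inside a transaction
pg_query($conn, "DECLARE my_cur CURSOR FOR SELECT id, name FROM big_table");

try {
    do {
        $res = pg_query($conn, "FETCH FORWARD 1000 FROM my_cur");
        $n = pg_num_rows($res);
        while ($row = pg_fetch_assoc($res)) {
            // process $row here instead of accumulating everything in $myArray
        }
        pg_free_result($res); // release libpq's buffer for this batch
    } while ($n == 1000);     // fewer than 1000 rows means we reached the end
} finally {
    pg_query($conn, "CLOSE my_cur"); // close the cursor even on exception
    pg_query($conn, "COMMIT");
}
```

The key point is that libpq only ever buffers one 1000-row batch at a time, and pg_free_result releases that buffer before the next FETCH.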