The basic query form (e.g. for a person) is:

    [{ "type": "/people/person", "name": None, "/common/topic/alias": [], "limit": 100 }]
Documentation is available at http://wiki.freebase.com/wiki/MQL_Manual
Using freebase.mqlreaditer() from the Python library http://code.google.com/p/freebase-python/ is the easiest way to iterate through all of the results. In this case, the "limit" clause determines the chunk size used for the underlying queries, but you'll get each result back individually at the API level.
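For example, a minimal sketch of walking all people with the query above (assuming the freebase-python library is installed and importable as freebase):

    import freebase

    q = [{ "type": "/people/person", "name": None,
           "/common/topic/alias": [], "limit": 100 }]

    # mqlreaditer() pages through the result set transparently, issuing
    # queries of "limit" results at a time and yielding one dict each.
    for r in freebase.mqlreaditer(q):
        print r["name"], r["/common/topic/alias"]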
By the way, how do you plan to disambiguate Jack Kennedy the president from the hurler, from the soccer player, from the books, etc., etc.? http://www.freebase.com/search?limit=30&start=0&query=jack+kennedy You might want to consider pulling additional information from Freebase (birth and death dates, book authors, other types assigned, etc.) if you have enough context to be able to use it for disambiguation.
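As a sketch of what that extra context might look like in the query (the "b:type" prefix notation is MQL's way of asking for the same property twice, and /people/person/date_of_birth is a standard person property):

    [{ "type": "/people/person",
       "name": "Jack Kennedy",
       "mid": None,
       "/people/person/date_of_birth": None,  # birth date, if recorded
       "b:type": [],                          # all other types assigned
       "limit": 100 }]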
Since some time has passed, it may now be easier and/or more efficient to work from the bulk data dumps rather than the API: http://wiki.freebase.com/wiki/Data_dumps
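As a rough sketch of the dump-based approach, assuming the tab-separated quad layout (source, property, destination, value) described on that page (the filename here is hypothetical):

    # Collect the ids of everything typed as a person from a
    # (decompressed) quad dump; layout and filename are assumptions.
    people = set()
    for line in open('freebase-datadump-quadruples.tsv'):
        fields = line.rstrip('\n').split('\t')
        if len(fields) != 4:
            continue
        source, prop, dest, value = fields
        if prop == '/type/object/type' and dest == '/people/person':
            people.add(source)
    print len(people)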
Edit - here is a working Python program, assuming that you have a list of type identifiers in a file called "types.txt":
    import freebase

    f = file('types.txt')
    for t in f:
        t = t.strip()
        # One query per type; mqlreaditer() handles the paging.
        q = [{'type': t,
              'mid': None,
              'name': None,
              '/common/topic/alias': [],
              'limit': 500,
              }]
        for r in freebase.mqlreaditer(q):
            # r['name'] can come back as None, so guard the join.
            print '\t'.join([t, r['mid'], r['name'] or ''] + r['/common/topic/alias'])
    f.close()
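For example, with a hypothetical types.txt containing one type id per line:

    /people/person
    /sports/pro_athlete
    /book/author

the program prints one tab-separated line per topic: the type id, the MID, the name, and any aliases.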
If you make the query more complex, you'll probably want to lower the limit to keep from running into timeouts, but for a simple query like this, raising the limit above the default of 100 will make it more efficient by fetching the results in bigger chunks.