How to get around the 100-record limit in a Splunk Python SDK request

When executing a query through the Splunk Python SDK, the results appear to be truncated at 100 records. How can I get around this limit?

I tried:

```python
job = service.jobs.create(qstring, max_count=0, max_time=0, count=10000)
while not job.is_ready():
    time.sleep(1)
out = list(results.ResultsReader(job.results()))
print(len(out))  # prints 100
```

but the same query in the Splunk web interface returns well over 100 rows.

2 answers

Here is a hack that seems to work (though it is certainly not the right way to do this):

In splunklib.binding, add the following line at the beginning of both `HttpLib.get` and `HttpLib.post`:

```python
kwargs['count'] = 100000
```
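Rather than editing the library source in place, the same hack can be applied as a monkey-patch. The sketch below shows the generic pattern of wrapping a method so a keyword default is always injected; `FakeHttpLib` is a hypothetical stand-in for `splunklib.binding.HttpLib`, which you would patch the same way on its `get` and `post` methods:

```python
import functools

def inject_default(method, **defaults):
    """Wrap a method so the given keyword defaults are applied on every call."""
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        for key, value in defaults.items():
            kwargs.setdefault(key, value)
        return method(*args, **kwargs)
    return wrapper

# Hypothetical stand-in for splunklib.binding.HttpLib, used only to
# demonstrate the wrapping technique in a self-contained way.
class FakeHttpLib:
    def get(self, path, **kwargs):
        return kwargs.get("count")

http = FakeHttpLib()
http.get = inject_default(http.get, count=100000)
print(http.get("/services/search/jobs"))  # 100000
```

Callers that pass their own `count` still win, since `setdefault` only fills in the value when it is missing.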

Try `job.results(count=0)`; a count of 0 means no limit.


Source: https://habr.com/ru/post/1209149/

