So, as mentioned in the comments, putting all the data into an array is easier, but that solution does not scale in efficiency as the data set grows. You should really only load everything into a list when you need random access to an arbitrary object; otherwise, generators are the way to go. Below is a prototype of a reader function that reads each JSON object individually and returns a generator.
The basic idea is to tell the reader to split on the newline character "\n" (or "\r\n" on Windows). Python can do this with the file.readline() function, or simply by iterating over the file object, as below.
import json

def json_readr(file):
    # Yield one parsed JSON object per line of the file
    with open(file, mode="r") as f:
        for line in f:
            yield json.loads(line)
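For example, you could consume the generator like this (a minimal sketch; the filename data.jsonl is just a placeholder):

for obj in json_readr("data.jsonl"):
    print(obj)  # each obj is one decoded JSON object, e.g. a dict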
However, this method only works if the file was written in that format, with each object separated by a newline character. Below is an example of a writer that takes a list of JSON objects and saves each one on its own line.
def json_writr(file, json_objects):
    # Serialize each object and write it on its own line
    with open(file, mode="w") as f:
        for jsonobj in json_objects:
            jsonstr = json.dumps(jsonobj)
            f.write(jsonstr + "\n")
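To see the two functions working together, here is a quick round trip (a sketch; the sample objects and filename are made up for illustration):

objects = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
json_writr("data.jsonl", objects)
assert list(json_readr("data.jsonl")) == objects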
You can also do the same operation with file.writelines() and a list comprehension:
...
jsobjs = [json.dumps(j) + "\n" for j in json_objects]
f.writelines(jsobjs)
...
And if you want to append data to an existing file instead of writing a new one, just change mode="w" to mode="a".
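As a sketch, an append-only variant might look like this (json_appendr is a hypothetical name, not from the original):

def json_appendr(file, json_objects):
    # Same as json_writr, but appends to the file instead of overwriting it
    with open(file, mode="a") as f:
        for jsonobj in json_objects:
            f.write(json.dumps(jsonobj) + "\n")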
In the end, I find this approach helps a lot: not only is the file more readable when you open it in a text editor, it also uses memory more efficiently.
On that note, if you change your mind at some point and want a list rather than a generator, Python lets you pass the generator to list() and it will populate the list automatically. In other words, just write
lst = list(json_readr(file))
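Conversely, if you only need the first few objects, the generator lets you stop early without reading the whole file, for example with itertools.islice from the standard library (a small sketch; data.jsonl is a placeholder):

import itertools

# Read only the first 10 objects, leaving the rest of the file unread
first_ten = list(itertools.islice(json_readr("data.jsonl"), 10))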
Hope this helps. Sorry if this was a bit verbose.