I believe the difference in performance will be undetectable to anyone but a profiler if the schema and data are the same. You will, however, notice a big difference if you pick the wrong XML parser. In other words, a SAX implementation can easily match or even outperform JSON parsing. There are many external factors that can skew the comparison; if you want the real story, throw both a JSON parser and a SAX parser at the same data/schema with no additional logic on top, as in the sketch below.

The big savings (or losses) come from the logic used to interpret the parse. Depending on your requirements, DOM or pull parsing may be easier to work with, whereas SAX can push you toward an overly complex, inefficient solution. There are also noticeable differences between individual parsers. Add file size as a variable and you will quickly lose sight of what you are actually measuring. Another example: if your XML contains DTD declarations and entity references that must be resolved over the wire, and your network connection has high latency, then JSON may well come out ahead. It all comes down to what you are really trying to do.
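For what it's worth, here is a minimal sketch of that kind of apples-to-apples test in Python, using the standard-library `xml.sax` and `json` modules. The payloads and the counting handler are hypothetical stand-ins for your real schema and data; both sides deliberately do nothing beyond walking the input, so the timing reflects raw parsing rather than interpretation logic:

```python
import json
import time
import xml.sax

# Hypothetical equivalent payloads: the same 10,000 records serialized
# as XML and as JSON. Substitute your real schema and data here.
N = 10_000
XML_DATA = "<items>" + "<item id='1'><name>widget</name></item>" * N + "</items>"
JSON_DATA = json.dumps({"items": [{"id": 1, "name": "widget"}] * N})


class CountingHandler(xml.sax.ContentHandler):
    """Counts elements and nothing else, so no application logic
    leaks into the measurement."""

    def __init__(self):
        super().__init__()
        self.elements = 0

    def startElement(self, name, attrs):
        self.elements += 1


def timed(label, fn, repeats=5):
    # Report the best of several runs to reduce noise from the OS/GC.
    best = min(_run_once(fn) for _ in range(repeats))
    print(f"{label}: {best * 1000:.2f} ms (best of {repeats})")


def _run_once(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start


if __name__ == "__main__":
    timed("SAX parse ", lambda: xml.sax.parseString(XML_DATA.encode(), CountingHandler()))
    timed("JSON parse", lambda: json.loads(JSON_DATA))
```

The point of the empty handler is exactly the advice above: as soon as you start building objects, resolving entities, or touching the network inside the callbacks, you are benchmarking your application logic, not the parsers.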