How to parse a large HTML file using the Java HTMLParser library

I have some HTML files created by exporting from FileMaker. Each file is basically one huge HTML table. I want to iterate over the table rows and insert them into a database. I tried to do this with HTMLParser as follows:

import org.htmlparser.Parser;
import org.htmlparser.filters.TagNameFilter;
import org.htmlparser.util.NodeList;

String inputHTML = readFile("filemakerExport.htm", "UTF-8");
Parser parser = new Parser();
parser.setInputHTML(inputHTML);
parser.setEncoding("UTF-8");
// Build the full node tree of the document
NodeList nl = parser.parse(null);
// Collect every <tr> anywhere in the tree
NodeList trs = nl.extractAllNodesThatMatch(new TagNameFilter("tr"), true);
for (int i = 0; i < trs.size(); i++) {
    NodeList nodes = trs.elementAt(i).getChildren();
    NodeList tds = nodes.extractAllNodesThatMatch(new TagNameFilter("td"), true);
    // Do stuff with tds
}

The above code works for files smaller than 1 MB. Unfortunately, I have a 4.8 MB HTML file and I get an out-of-memory error:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at org.htmlparser.lexer.Lexer.parseTag(Lexer.java:1002)
    at org.htmlparser.lexer.Lexer.nextNode(Lexer.java:369)
    at org.htmlparser.scanners.CompositeTagScanner.scan(CompositeTagScanner.java:111)
    at org.htmlparser.util.IteratorImpl.nextNode(IteratorImpl.java:92)
    at org.htmlparser.Parser.parse(Parser.java:701)
    at Tools.main(Tools.java:33)

Is there a more efficient way to solve this problem using HTMLParser (I'm completely new to the library), or do I need to use a different library or approach?

+3
4 answers

Increase the JVM heap size. 512 MB should be enough: -Xmx512m

Pass it on the command line when you launch your class:

java -Xmx512M myrunclass
+5

HTMLParser builds a DOM-style tree of the whole document in memory. A DOM is convenient when you need random access or XPath-like lookups, but for a document this size it is expensive.

Try processing the nodes as they are encountered instead: use Parser.visitAllNodesWith() with a NodeVisitor rather than Parser.parse(), as in the sketch below.
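A minimal sketch of that visitor approach, assuming HTMLParser 1.6's NodeVisitor API and reusing the readFile helper and file name from the question; the RowVisitor class is mine. Note that the whole <table> is still scanned as one composite node, so this may need to be combined with a larger heap:

import java.util.ArrayList;
import java.util.List;
import org.htmlparser.Parser;
import org.htmlparser.Tag;
import org.htmlparser.Text;
import org.htmlparser.visitors.NodeVisitor;

class RowVisitor extends NodeVisitor {
    private final StringBuilder cell = new StringBuilder();
    private final List<String> row = new ArrayList<String>();

    public void visitTag(Tag tag) {
        if ("TD".equals(tag.getTagName()))
            cell.setLength(0);            // start a new cell
    }

    public void visitStringNode(Text text) {
        cell.append(text.getText());      // accumulate the cell's text
    }

    public void visitEndTag(Tag tag) {
        String name = tag.getTagName();
        if ("TD".equals(name)) {
            row.add(cell.toString());     // cell finished
        } else if ("TR".equals(name)) {
            // a complete row is available here -- insert it into the database
            row.clear();
        }
    }
}

Parser parser = new Parser();
parser.setInputHTML(readFile("filemakerExport.htm", "UTF-8"));
parser.setEncoding("UTF-8");
parser.visitAllNodesWith(new RowVisitor());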

+1

I ran into the same thing with HtmlParser. Profiling with JProfiler showed that HtmlParser keeps a reference to the parsed HTML page, and parser.reset() alone did not release it.

In my case, calling parser.setInputHTML(""); once I was done with the page freed the memory.
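A sketch of that clean-up, reusing the setup from the question (the readFile helper and file name come from the question; where exactly you call it depends on your code):

Parser parser = new Parser();
parser.setEncoding("UTF-8");
parser.setInputHTML(readFile("filemakerExport.htm", "UTF-8"));
NodeList nl = parser.parse(null);
// ... walk nl and write the rows to the database ...
parser.setInputHTML("");   // drop the parser's reference to the big page so it can be GC'd
nl = null;                 // and drop our own reference to the tree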

P.S. I don't use HtmlParser anymore :)

0

HTMLParser holds the entire parsed document in memory, so memory use grows with the size of the file. Your export is essentially one giant HTML table, which makes that especially painful. HTMLParser is also a fairly dated library and awkward to work with. I would try JSoup instead, which is actively maintained and has a much friendlier API.
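A rough JSoup equivalent of the original loop (the file name comes from the question). Note that JSoup also parses the whole document into memory, so this is not a streaming solution, but a 4.8 MB file should be comfortably within reach with a normal heap:

import java.io.File;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

Document doc = Jsoup.parse(new File("filemakerExport.htm"), "UTF-8");
for (Element row : doc.select("tr")) {
    for (Element cell : row.select("td")) {
        String value = cell.text();   // cell text with tags stripped
        // insert value into the database
    }
}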

0

Source: https://habr.com/ru/post/1709088/

