Are there any Java examples using Mylyn WikiText?

I want to use Java to convert from the formats supported by Mylyn WikiText to HTML. I have not found any useful online examples demonstrating how to drive Mylyn WikiText from Java. I found this page on the Eclipse wiki, but it uses Ant. I am looking for example code.

1 answer

While you could always script Ant and invoke it from Java, the following is a complete utility class I just wrote that lets you convert wiki text in each of the major markup languages registered in the standalone WikiText deployment.

Note that the easiest way to get started is to download the standalone deployment and read through its API to determine which other configuration options you may want to apply.
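If you build with Maven rather than the standalone zip, a dependency declaration along the following lines may work; the coordinates and version below are an illustration only and should be verified against the Mylyn WikiText release you target:

```xml
<!-- Hypothetical coordinates: confirm groupId/artifactId/version
     for your Mylyn WikiText release before using. -->
<dependency>
    <groupId>org.eclipse.mylyn.docs</groupId>
    <artifactId>org.eclipse.mylyn.wikitext.core</artifactId>
    <version>REPLACE_WITH_YOUR_VERSION</version>
</dependency>
```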

Convert WikiText to HTML

package com.stackoverflow.mylyn;

import java.io.StringWriter;
import java.util.Set;
import java.util.TreeSet;

import org.eclipse.mylyn.wikitext.core.parser.MarkupParser;
import org.eclipse.mylyn.wikitext.core.parser.builder.HtmlDocumentBuilder;
import org.eclipse.mylyn.wikitext.core.parser.markup.MarkupLanguage;
import org.eclipse.mylyn.wikitext.core.util.ServiceLocator;

/**
 * Utility to parse wiki text of varying languages and convert it to HTML.
 */
public final class ParseWikiToHTMLUtility {

    public static final String NAME_TEXTILE = "Textile";
    public static final String NAME_TRACWIKI = "TracWiki";
    public static final String NAME_MEDIAWIKI = "MediaWiki";
    public static final String NAME_CONFLUENCE = "Confluence";
    public static final String NAME_TWIKI = "TWiki";

    private ParseWikiToHTMLUtility() {
        /* Do not instantiate utility classes */
    }

    public static String parseTextile(String wikiText) {
        return parseByLanguage(NAME_TEXTILE, wikiText);
    }

    public static String parseTracWiki(String wikiText) {
        return parseByLanguage(NAME_TRACWIKI, wikiText);
    }

    public static String parseMediaWiki(String wikiText) {
        return parseByLanguage(NAME_MEDIAWIKI, wikiText);
    }

    public static String parseConfluence(String wikiText) {
        return parseByLanguage(NAME_CONFLUENCE, wikiText);
    }

    public static String parseTWiki(String wikiText) {
        return parseByLanguage(NAME_TWIKI, wikiText);
    }

    public static String parseByLanguage(String name, String wikiText) {
        return parseByLanguage(ServiceLocator.getInstance().getMarkupLanguage(name), wikiText);
    }

    public static String parseByLanguage(MarkupLanguage language, String wikiText) {
        StringWriter writer = new StringWriter();
        HtmlDocumentBuilder builder = new HtmlDocumentBuilder(writer);
        MarkupParser parser = new MarkupParser(language, builder);
        parser.parse(wikiText);
        return writer.toString();
    }

    /**
     * The MarkupLanguage API prefers that we retrieve the MarkupLanguage by name
     * from the ServiceLocator; since there are no name constants, we obtain the
     * names from this method, or alternately use the hard-coded names from this
     * utility class, which were pulled directly from a prior call to this very
     * method.
     */
    public static Set<String> getLanguageNames() {
        Set<String> languages = new TreeSet<String>();
        for (MarkupLanguage language : ServiceLocator.getInstance().getAllMarkupLanguages()) {
            languages.add(language.getName());
        }
        return languages;
    }
}
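A minimal sketch of how the utility above might be called, assuming the WikiText jars are on the classpath (the class name ParseWikiDemo and the sample markup are my own illustration, not part of the original answer):

```java
package com.stackoverflow.mylyn;

// Hypothetical demo class; requires the Mylyn WikiText standalone jars
// on the classpath alongside ParseWikiToHTMLUtility from this answer.
public class ParseWikiDemo {

    public static void main(String[] args) {
        // Convert a small MediaWiki snippet to an HTML document
        String html = ParseWikiToHTMLUtility.parseMediaWiki(
                "== Heading ==\nSome '''bold''' text.");
        System.out.println(html);

        // Discover which markup languages are registered in this deployment
        System.out.println(ParseWikiToHTMLUtility.getLanguageNames());
    }
}
```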

To convert from HTML, use the HtmlParser.parse() method: supply your HTML as an InputSource and specify the target DocumentBuilder. DocumentBuilders include XslfoDocumentBuilder (for XSL-FO and eventual conversion to PDF or PostScript), DocBookDocumentBuilder (for the DocBook format), and the classes extending AbstractMarkupDocumentBuilder (there is one for nearly every markup language: TextileDocumentBuilder, ConfluenceDocumentBuilder, etc.).
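A hedged sketch of the reverse direction, HTML to Textile, wired from the classes named above. The exact package names and constructor signatures vary between WikiText versions, so treat the imports here as assumptions to be checked against your deployment's API:

```java
package com.stackoverflow.mylyn;

import java.io.StringReader;
import java.io.StringWriter;

// Package locations below are assumed from the WikiText version
// referenced in this answer; verify them against your jars.
import org.eclipse.mylyn.wikitext.core.parser.HtmlParser;
import org.eclipse.mylyn.wikitext.core.parser.builder.TextileDocumentBuilder;
import org.xml.sax.InputSource;

public class HtmlToTextileSketch {

    /** Converts an HTML fragment to Textile markup. */
    public static String toTextile(String html) throws Exception {
        StringWriter out = new StringWriter();
        TextileDocumentBuilder builder = new TextileDocumentBuilder(out);
        // HtmlParser.parse() takes the HTML as an InputSource and emits
        // events into the chosen DocumentBuilder.
        new HtmlParser().parse(new InputSource(new StringReader(html)), builder);
        return out.toString();
    }
}
```

Swapping TextileDocumentBuilder for ConfluenceDocumentBuilder, DocBookDocumentBuilder, or XslfoDocumentBuilder changes the output format without touching the parsing side.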

Other WikiText links:


Source: https://habr.com/ru/post/1479587/
