Cross-Platform Unicode Support

I find that getting Unicode support right in my cross-platform applications is a real pain in the butt.

I need strings that can be passed cleanly from C code to a database, to a Java application, and to a Perl module. Each of these uses a different Unicode encoding (UTF-8, UTF-16) or some other code page. The main thing I need is a cross-platform way to perform the conversions.

What tools, libraries, or methods do people use to make things easier?

+4
4 answers

Take a look at this: http://www.icu-project.org/
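
As a minimal sketch (assuming ICU is installed; the sample string, buffer size, and converter names are only illustrative), converting UTF-8 to UTF-16LE with ICU's C converter API could look roughly like this:

    /* Rough sketch: UTF-8 -> UTF-16LE with ICU's ucnv_convert().
     * The string literal and buffer size are placeholders. */
    #include <stdio.h>
    #include <unicode/ucnv.h>

    int main(void) {
        const char *utf8 = "Gr\xC3\xBC\xC3\x9F""e";  /* "Grüße" as UTF-8 bytes */
        char utf16[64];                              /* target buffer for UTF-16LE bytes */
        UErrorCode status = U_ZERO_ERROR;

        int32_t len = ucnv_convert("UTF-16LE", "UTF-8",
                                   utf16, (int32_t)sizeof(utf16),
                                   utf8, -1,         /* -1: source is NUL-terminated */
                                   &status);
        if (U_FAILURE(status)) {
            fprintf(stderr, "conversion failed: %s\n", u_errorName(status));
            return 1;
        }
        printf("converted to %d bytes of UTF-16LE\n", (int)len);
        return 0;
    }

You would typically link against ICU's common library (for example via `pkg-config --cflags --libs icu-uc`).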

+4

Perl ships with Encode in its standard library. You can use it to read and write any encoding you need without any trouble.

+2

How are you making the cross-platform calls? Is everything being called from Java?

http://java.sun.com/docs/books/tutorial/i18n/text/string.html may be helpful.

I am a little confused about what you are trying to do. Is the database essentially the interface between all of the code? If so, it should be simple: make the database UTF-8, and each of the clients does its own conversion.

Sounds like an interesting issue, can you share more details?

0

Well, I think iconv is enough for your needs. iconv should be available by default on any POSIX system (including (GNU/)Linux, *BSD, Mac OS X, ...); a small usage sketch follows the list below. Windows, AFAIK, requires a separate library, but:

  1. You can simply install it, link it into your software, or compile it in statically (libiconv for Windows). I think I would recommend linking it.
  2. You can use the native Windows calls (e.g. MultiByteToWideChar / WideCharToMultiByte) as a special case.
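
A minimal sketch of such a conversion, assuming a POSIX libc iconv or GNU libiconv; the sample string, buffer size, and encoding names are only placeholders:

    /* Rough sketch: UTF-8 -> UTF-16LE with iconv(). */
    #include <stdio.h>
    #include <string.h>
    #include <iconv.h>

    int main(void) {
        char in[] = "Gr\xC3\xBC\xC3\x9F""e";   /* "Grüße" as UTF-8 bytes */
        char out[64];                           /* target buffer for UTF-16LE bytes */
        char *inp = in, *outp = out;
        size_t inleft = strlen(in), outleft = sizeof(out);

        iconv_t cd = iconv_open("UTF-16LE", "UTF-8");  /* to-encoding, from-encoding */
        if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

        if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
            perror("iconv");
            iconv_close(cd);
            return 1;
        }
        printf("wrote %zu bytes of UTF-16LE\n", sizeof(out) - outleft);
        iconv_close(cd);
        return 0;
    }

On glibc, iconv is part of libc; with GNU libiconv (for example on Windows) you typically link with -liconv.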

Of course, if you use Java, it has conversion built in, but I can see that it may not be what you want (JNI calls are expensive).

PS. Can't you just configure Perl to use a specific encoding?

0

Source: https://habr.com/ru/post/1276516/
