What is the correct JNA mapping for UniChar on Mac OS X?

I have a C structure as follows:

struct HFSUniStr255 {
    UInt16 length;
    UniChar unicode[255];
};

I have mapped it as follows:

public class HFSUniStr255 extends Structure
{
    public UInt16 length; // UInt16 is just an IntegerType with length 2 for convenience.

    public /*UniChar*/ char[] unicode = new char[255];
    //public /*UniChar*/ byte[] unicode = new byte[255*2];
    //public /*UniChar*/ UInt16[] unicode = new UInt16[255];

    public HFSUniStr255()
    {
    }

    public HFSUniStr255(Pointer pointer)
    {
        super(pointer);
    }
}

If I use this version, I get every second character of the string in my char[] ("aits D" for "Macintosh HD"). I assume this has something to do with being on a 64-bit platform and JNA mapping the value to a 32-bit wchar_t, but then chopping 16 bits off each wchar_t as it is copied.

If I use the byte[] version, I get data that decodes correctly with the UTF-16LE encoding.

If I use the UInt16[] version, I get the correct code point for each character, but converting them back into a String is inconvenient.
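For reference, decoding the byte[] variant is a one-liner. A minimal sketch (the byte array here is a stand-in for the first length code units read from the structure's unicode field, which is an assumption for illustration):

```java
import java.nio.charset.StandardCharsets;

public class DecodeDemo {
    public static void main(String[] args) {
        // Pretend these are the raw bytes of the struct's unicode field,
        // and `length` is the struct's length field (in UTF-16 code units).
        int length = 12;
        byte[] unicode = "Macintosh HD".getBytes(StandardCharsets.UTF_16LE);

        // Only the first `length` code units (2 bytes each) are valid.
        String name = new String(unicode, 0, length * 2, StandardCharsets.UTF_16LE);
        System.out.println(name); // prints "Macintosh HD"
    }
}
```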

Answer (+3):

The char[] mapping is the problem: JNA maps a Java char to the native wchar_t, which is 32 bits wide on Mac OS X, while UniChar is a 16-bit UTF-16 code unit. The field layout therefore doesn't match, and you lose every other character, exactly as you observed.

The byte[] version is the one to use, for two reasons:

  • JNA copies the bytes through without any conversion, so the field occupies exactly 255*2 bytes, matching the native struct.
  • UTF-16LE decodes trivially on the JVM, whose String is UTF-16 internally, so the unicode field converts straight to a String.

In short: go with the byte[] version and decode the first length code units as UTF-16LE.

Comment: and what is wrong with UInt16[]? (0)
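If you do keep the UInt16[] mapping, the conversion back to a String is only a few lines. A sketch, assuming UInt16 exposes its value via intValue() the way JNA's IntegerType subclasses do (the UInt16 class below is a hypothetical stand-in, not JNA's):

```java
public class UniCharDemo {
    // Stand-in for the UInt16 IntegerType from the question: wraps one code unit.
    static final class UInt16 {
        private final int value;
        UInt16(int value) { this.value = value & 0xFFFF; }
        int intValue() { return value; }
    }

    // Convert the first `length` UTF-16 code units into a String.
    static String toString(UInt16[] unicode, int length) {
        char[] chars = new char[length];
        for (int i = 0; i < length; i++) {
            chars[i] = (char) unicode[i].intValue();
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        String s = "Macintosh HD";
        UInt16[] units = new UInt16[s.length()];
        for (int i = 0; i < s.length(); i++) {
            units[i] = new UInt16(s.charAt(i));
        }
        System.out.println(toString(units, units.length)); // prints "Macintosh HD"
    }
}
```

Each UTF-16 code unit fits in a Java char, so casting is safe here; surrogate pairs pass through unchanged as two code units.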

Source: https://habr.com/ru/post/1770436/