From the start, Java has used UTF-16 to encode characters. In the early days, the Unicode character set was limited to 16 bits, so the 16-bit char type, backed by UTF-16, could represent every character. Once Unicode was extended to cover the range up to U+10FFFF, a single UTF-16 code unit could no longer represent characters above U+FFFF. Support for these supplementary characters was therefore added in J2SE 5.0 through the Character class, while the primitive char still only holds code units up to U+FFFF. The Java 2 platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).
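As a small sketch of this representation (the emoji code point U+1F600 is just an illustrative choice), the following example shows a supplementary character occupying two char values, one from each surrogate range:

```java
public class SurrogatePairDemo {
    public static void main(String[] args) {
        // U+1F600 lies above U+FFFF, so String stores it as a UTF-16
        // surrogate pair: a high surrogate followed by a low surrogate.
        String s = new String(Character.toChars(0x1F600));

        System.out.println(s.length());                      // 2 char code units
        System.out.println(s.codePointCount(0, s.length())); // 1 code point

        char high = s.charAt(0);
        char low  = s.charAt(1);
        // Prints high=U+D83D low=U+DE00
        System.out.printf("high=U+%04X low=U+%04X%n", (int) high, (int) low);

        System.out.println(Character.isHighSurrogate(high)); // true
        System.out.println(Character.isLowSurrogate(low));   // true
    }
}
```

Note that String.length() counts char code units, not characters, which is why it reports 2 here.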
A char value, therefore, represents Basic Multilingual Plane (BMP) code points, including the surrogate code points, or code units of the UTF-16 encoding. An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points and the upper (most significant) 11 bits must be zero. Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:
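The char-versus-int distinction can be seen directly in the String API: charAt walks 16-bit code units, while codePointAt and codePoints return whole code points as int values, combining a surrogate pair where needed. A minimal sketch (the string content is an arbitrary example mixing a BMP and a supplementary character):

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // "a" (a BMP character) followed by U+20000, a CJK ideograph
        // outside the BMP that needs a surrogate pair.
        String s = "a" + new String(Character.toChars(0x20000));

        System.out.println(s.length());          // 3 code units
        System.out.println(s.codePointAt(1));    // 131072 (0x20000), one code point

        // Iterating code points yields U+0061 and U+20000.
        s.codePoints().forEach(cp -> System.out.printf("U+%04X%n", cp));

        // The highest code point, U+10FFFF, fits in the low 21 bits of an int.
        System.out.println(Integer.toBinaryString(0x10FFFF).length()); // 21
    }
}
```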
* The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value, if followed by any low-surrogate value in a string, would represent a letter.
* The methods that accept an int value support all Unicode characters, including supplementary characters. For example, Character.isLetter(0x2F81A) returns true because the code point value represents a letter (a CJK ideograph).
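Both behaviors described above can be demonstrated side by side. This sketch reuses the two values from the examples (U+D840 and U+2F81A) and also shows how codePointAt lets the int overload handle a supplementary character found in a string:

```java
public class IsLetterDemo {
    public static void main(String[] args) {
        // char overload: a lone high surrogate is an undefined character.
        System.out.println(Character.isLetter('\uD840'));  // false

        // int overload: the full code point U+2F81A is a letter
        // (a CJK compatibility ideograph).
        System.out.println(Character.isLetter(0x2F81A));   // true

        // In a string, U+2F81A is stored as the pair \uD87E\uDC1A.
        // Reading the code point at index 0 recovers the whole character,
        // so the int overload classifies it correctly.
        String s = "\uD87E\uDC1A";
        System.out.println(Character.isLetter(s.charAt(0)));      // false
        System.out.println(Character.isLetter(s.codePointAt(0))); // true
    }
}
```

The practical takeaway: when text may contain supplementary characters, prefer the int-based Character methods together with codePointAt, rather than the char-based overloads.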