/**
\page unicode Unicode and UTF-8 Support
This chapter explains how FLTK handles international
text via Unicode and UTF-8.
Unicode support was only recently added to FLTK and is
still incomplete. This chapter is a work in progress and reflects
the current state of Unicode support in FLTK.
\section unicode_about About Unicode, ISO 10646 and UTF-8
The summary of Unicode, ISO 10646 and UTF-8 given below is
deliberately brief, and provides just enough information for
the rest of this chapter.
For further information, please see:
- http://www.unicode.org
- http://www.iso.org
- http://en.wikipedia.org/wiki/Unicode
- http://www.cl.cam.ac.uk/~mgk25/unicode.html
\par The Unicode Standard
The Unicode Standard was originally developed by a consortium of mainly
US computer manufacturers and developers of multi-lingual software.
It has now become a de facto standard for character encoding,
and is supported by most of the major computing companies in the world.
Before Unicode, many different systems, on different platforms,
had been developed for encoding characters for different languages,
but no single encoding could satisfy all languages.
Unicode provides access to over 100,000 characters
used in all the major languages written today,
and is independent of platform and language.
Unicode also provides higher-level concepts needed for text processing
and typographic publishing systems, such as algorithms for sorting and
comparing text, composite character and text rendering, right-to-left
and bi-directional text handling.
There are currently no plans to add this extra functionality to FLTK.
\par ISO 10646
The International Organization for Standardization (ISO) had also
been trying to develop a single unified character set.
Although both ISO and the Unicode Consortium continue to publish
their own standards, they have agreed to coordinate their work so
that specific versions of the Unicode and ISO 10646 standards are
compatible with each other.
The international standard ISO 10646 defines the
Universal Character Set (UCS)
which contains the characters required for almost all known languages.
The standard also defines three different implementation levels specifying
how these characters can be combined.
There are currently no plans for handling the different implementation
levels or the combining characters in FLTK.
In UCS, characters have a unique numerical code and an official name,
and are usually shown using 'U+' and the code in hexadecimal,
e.g. U+0041 is the "Latin capital letter A".
The UCS characters U+0000 to U+007F correspond to US-ASCII,
and U+0000 to U+00FF correspond to ISO 8859-1 (Latin1).
The UCS also defines various methods of encoding characters as
a sequence of bytes.
UCS-2 encodes Unicode characters into two bytes per character,
which is wasteful if you are only dealing with ASCII or Latin1 text,
and insufficient if you need characters above U+FFFF.
UCS-4 uses four bytes per character, which lets it handle higher
code points, but is even more wasteful for ASCII or Latin1 text.
\par UTF-8
The Unicode standard defines various UCS Transformation Formats.
UTF-16 and UTF-32 are based on units of two and four bytes respectively.
UTF-8 encodes all Unicode characters into variable-length
sequences of bytes. Unicode characters in the 7-bit ASCII
range map to the same values and are represented as single bytes,
making the transformation to Unicode quick and easy.
All UCS characters above U+007F are encoded as a sequence of
several bytes. The top bits of the first byte are set to show
the length of the byte sequence, and subsequent bytes are
always in the range 0x80 to 0xBF. This combination provides
some level of synchronization and error detection.

| Unicode range           | Byte sequences                                        |
|--------------------------|------------------------------------------------------|
| U+00000000 - U+0000007F | 0xxxxxxx                                              |
| U+00000080 - U+000007FF | 110xxxxx 10xxxxxx                                     |
| U+00000800 - U+0000FFFF | 1110xxxx 10xxxxxx 10xxxxxx                            |
| U+00010000 - U+001FFFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx                   |
| U+00200000 - U+03FFFFFF | 111110xx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx          |
| U+04000000 - U+7FFFFFFF | 1111110x 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx |

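As an illustration of the encoding scheme in the table above, the
following sketch (not part of FLTK) packs a code point below U+0800
into one or two bytes by hand; the FLTK function fl_utf8encode()
described later in this chapter does this for all ranges.
\code
// Illustrative only: hand-encode a code point below U+0800 following
// the first two rows of the table above.
int encode_small(unsigned ucs, unsigned char out[2]) {
  if (ucs < 0x80) {                              // one byte: 0xxxxxxx
    out[0] = (unsigned char)ucs;
    return 1;
  }
  out[0] = (unsigned char)(0xC0 | (ucs >> 6));   // 110xxxxx : top five bits
  out[1] = (unsigned char)(0x80 | (ucs & 0x3F)); // 10xxxxxx : low six bits
  return 2;
}
\endcode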
Moving from ASCII encoding to Unicode will allow all new FLTK
applications to be easily internationalized and used all
over the world. By choosing UTF-8 encoding, FLTK remains
largely source-code compatible with previous versions of the
library.
\section unicode_in_fltk Unicode in FLTK
FLTK will be entirely converted to Unicode using UTF-8 encoding.
If a different encoding is required by the underlying operating
system, FLTK will convert strings as needed.
It is important to note that the initial implementation of
Unicode and UTF-8 in FLTK involves three important areas:
- provision of Unicode character tables and some simple related functions;
- conversion of char* variables and function parameters from single byte
per character representation to UTF-8 variable length characters;
- modifications to the display font interface to accept general
Unicode character or UCS code numbers instead of just ASCII or Latin1
characters.
The current implementation of Unicode / UTF-8 in FLTK will impose
the following limitations:
- An implementation note in the code says that all functions are
LIMITED to 24-bit Unicode values, but also says that only 16 bits
are really used under Linux and Win32.
- FLTK will only handle single characters, so composed characters
consisting of a base character and floating accent characters
will be treated as multiple characters;
- FLTK will only compare or sort strings on a byte by byte basis
and not on a general Unicode character basis;
- FLTK will not handle right-to-left or bi-directional text;
\todo
Verify 16/24 bit Unicode limit for different character sets?
OksiD's code appears limited to 16-bit whereas the FLTK2 code
appears to handle a wider set. What about illegal characters?
See comments in fl_utf8fromwc() and fl_utf8toUtf16().
\section unicode_fltk_calls FLTK Unicode and UTF-8 functions
This section currently provides a brief overview of the functions.
For more details, consult the main text for each function via its link.
int fl_utf8locale()
\b FLTK2
\par
\p %fl_utf8locale() returns true if the "locale" seems to indicate
that UTF-8 encoding is used.
\par
It is highly recommended that you change your system so this does
return true!
int fl_utf8test(const char *src, unsigned len)
\b FLTK2
\par
\p %fl_utf8test() examines the first \p len bytes of \p src.
It returns 0 if there are any illegal UTF-8 sequences;
1 if \p src contains plain ASCII or if \p len is zero;
or 2, 3 or 4 to indicate the range of Unicode characters found.
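\par
As a minimal sketch (assuming these functions are declared in
FL/fl_utf8.h), \p %fl_utf8test() can be used to check a buffer before
treating it as UTF-8:
\code
#include <FL/fl_utf8.h>

bool is_legal_utf8(const char *buf, unsigned len) {
  // 0 = illegal sequences found; 1 = plain ASCII (or len is zero);
  // 2, 3 or 4 = legal UTF-8 containing multi-byte characters.
  return fl_utf8test(buf, len) > 0;
}
\endcode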
int fl_utf_nb_char(const unsigned char *buf, int len)
\b OksiD
\par
Returns the number of UTF-8 characters in the first \p len bytes of \p buf.
int fl_unichar_to_utf8_size(Fl_Unichar)
int fl_utf8bytes(unsigned ucs)
\par
Returns the number of bytes needed to encode \p ucs in UTF-8.
int fl_utf8len(char c)
\b OksiD
\par
If \p c is a valid first byte of a UTF-8 encoded character sequence,
\p %fl_utf8len() will return the number of bytes in that sequence.
It returns -1 if \p c is not a valid first byte.
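\par
A minimal sketch of stepping through a UTF-8 string with
\p %fl_utf8len() to count the characters it contains; the error
handling for illegal bytes is illustrative only:
\code
#include <FL/fl_utf8.h>

int count_utf8_chars(const char *s) {
  int n = 0;
  while (*s) {
    int len = fl_utf8len(*s); // bytes in this sequence, or -1 if not a valid first byte
    if (len < 1) len = 1;     // skip a stray continuation or illegal byte
    s += len;
    n++;
  }
  return n;
}
\endcode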
unsigned int fl_nonspacing(unsigned int ucs)
\b OksiD
\par
Returns true if \p ucs is a non-spacing character.
[What are non-spacing characters?]
const char* fl_utf8back(const char *p, const char *start, const char *end)
\b FLTK2
const char* fl_utf8fwd(const char *p, const char *start, const char *end)
\b FLTK2
\par
If \p p already points to the start of a UTF-8 character sequence,
these functions will return \p p.
Otherwise \p %fl_utf8back() searches backwards from \p p
and \p %fl_utf8fwd() searches forwards from \p p,
within the \p start and \p end limits,
looking for the start of a UTF-8 character.
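\par
As a minimal sketch, \p %fl_utf8back() can be used to truncate a UTF-8
string at a byte limit without splitting a multi-byte character; the
helper name is illustrative only:
\code
#include <FL/fl_utf8.h>

// Return a truncation length of at most max_bytes that does not cut
// through a multi-byte UTF-8 sequence.
unsigned safe_truncate(const char *s, unsigned len, unsigned max_bytes) {
  if (len <= max_bytes) return len;
  const char *p = fl_utf8back(s + max_bytes, s, s + len);
  return (unsigned)(p - s);
}
\endcode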
unsigned int fl_utf8decode(const char *p, const char *end, int *len)
\b FLTK2
int fl_utf8encode(unsigned ucs, char *buf)
\b FLTK2
\par
\p %fl_utf8decode() attempts to decode the UTF-8 character that starts
at \p p and may not extend past \p end.
It returns the Unicode value, and the length of the UTF-8 character sequence
is returned via the \p len argument.
\p %fl_utf8encode() writes the UTF-8 encoding of \p ucs into \p buf
and returns the number of bytes in the sequence.
See the main documentation for the treatment of illegal Unicode
and UTF-8 sequences.
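\par
As a minimal sketch, the two functions can be combined to walk a UTF-8
string one character at a time and re-encode each character; the buffer
size of four bytes is assumed to be sufficient for a single sequence:
\code
#include <FL/fl_utf8.h>

void recode(const char *p, const char *end) {
  while (p < end) {
    int len = 0;
    unsigned ucs = fl_utf8decode(p, end, &len); // len receives the bytes consumed
    char buf[4];
    int n = fl_utf8encode(ucs, buf);            // returns the bytes written to buf
    // ... use buf[0..n-1] here ...
    p += len;
  }
}
\endcode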
unsigned int fl_utf8froma(char *dst, unsigned dstlen, const char *src, unsigned srclen)
\b FLTK2
unsigned int fl_utf8toa(const char *src, unsigned srclen, char *dst, unsigned dstlen)
\b FLTK2
\par
\p %fl_utf8froma() converts a character string containing single bytes
per character (i.e. ASCII or ISO-8859-1) into UTF-8.
If the \p src string contains only ASCII characters, the return value will
be the same as \p srclen.
\par
\p %fl_utf8toa() converts a string containing UTF-8 characters into
single byte characters. UTF-8 characters that do not correspond to
ASCII or ISO-8859-1 characters below 0xFF are replaced with '?'.
\par
Both functions return the number of bytes that would be written, not
counting the null terminator.
\p dstlen provides a means of limiting the number of bytes written,
so setting \p dstlen to zero is a means of measuring how much storage
would be needed before doing the real conversion.
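\par
A minimal sketch of the "measure, then convert" pattern described
above, using \p %fl_utf8froma(); the helper name and memory handling
are illustrative, and passing a null \p dst for the measuring pass is
assumed to be acceptable:
\code
#include <FL/fl_utf8.h>
#include <stdlib.h>
#include <string.h>

char *latin1_to_utf8(const char *src) {
  unsigned srclen = (unsigned)strlen(src);
  unsigned need = fl_utf8froma(NULL, 0, src, srclen); // bytes needed, not counting the terminator
  char *dst = (char *)malloc(need + 1);               // + 1 for the null terminator
  fl_utf8froma(dst, need + 1, src, srclen);           // the real conversion
  return dst;                                         // caller frees the result
}
\endcode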
char* fl_utf2mbcs(const char *src)
\b OksiD
\par
Converts a UTF-8 string to a local multi-byte character string.
[More info required here!]
unsigned int fl_utf8fromwc(char *dst, unsigned dstlen, const wchar_t *src, unsigned srclen)
\b FLTK2
unsigned int fl_utf8towc(const char *src, unsigned srclen, wchar_t *dst, unsigned dstlen)
\b FLTK2
unsigned int fl_utf8toUtf16(const char *src, unsigned srclen, unsigned short *dst, unsigned dstlen)
\b FLTK2
\par
These routines convert between UTF-8 and \p wchar_t or "wide character"
strings.
The difficulty lies in the fact that \p sizeof(wchar_t) is 2 on Windows
and 4 on Linux and most other systems.
Therefore some "wide characters" on Windows have to be represented
as "surrogate pairs" of two \p wchar_t elements.
\par
\p %fl_utf8fromwc() converts from a "wide character" string to UTF-8.
Note that \p srclen is the number of \p wchar_t elements in the source
string, and on Windows this might be larger than the number of characters.
\p dstlen specifies the maximum number of \b bytes to copy, including
the null terminator.
\par
\p %fl_utf8towc() converts a UTF-8 string into a "wide character" string.
Note that on Windows, some "wide characters" might result in "surrogate
pairs" and therefore the return value might be more than the number of
characters.
\p dstlen specifies the maximum number of \b wchar_t elements to copy,
including a zero terminating element.
[Is this all worded correctly?]
\par
\p %fl_utf8toUtf16() converts a UTF-8 string into a UTF-16 encoded
string, using "surrogate pairs" where needed, as on Windows.
\p dstlen specifies the maximum number of 16-bit elements to copy,
including a zero terminating element.
[Is this all worded correctly?]
\par
These routines all return the number of elements that would be required
for a full conversion of the \p src string, including the zero terminator.
Therefore setting \p dstlen to zero is a way of measuring how much storage
would be needed before doing the real conversion.
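\par
A minimal sketch of converting a "wide character" string to UTF-8 with
\p %fl_utf8fromwc(), again using a \p dstlen of zero for the measuring
pass; the helper name and memory handling are illustrative, and a null
\p dst is assumed to be acceptable while measuring:
\code
#include <FL/fl_utf8.h>
#include <stdlib.h>
#include <wchar.h>

char *wide_to_utf8(const wchar_t *src) {
  unsigned srclen = (unsigned)wcslen(src);
  unsigned need = fl_utf8fromwc(NULL, 0, src, srclen); // size needed for the full conversion
  char *dst = (char *)malloc(need + 1);                // one extra byte for safety
  fl_utf8fromwc(dst, need + 1, src, srclen);
  return dst;                                          // caller frees the result
}
\endcode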
unsigned int fl_utf8from_mb(char *dst, unsigned dstlen, const char *src, unsigned srclen)
\b FLTK2
unsigned int fl_utf8to_mb(const char *src, unsigned srclen, char *dst, unsigned dstlen)
\b FLTK2
\par
These functions convert between UTF-8 and the locale-specific multi-byte
encodings used on some systems for filenames, etc.
If fl_utf8locale() returns true, these functions don't do anything useful.
[Is this all worded correctly?]
int fl_tolower(unsigned int ucs)
\b OksiD
int fl_toupper(unsigned int ucs)
\b OksiD
int fl_utf_tolower(const unsigned char *str, int len, char *buf)
\b OksiD
int fl_utf_toupper(const unsigned char *str, int len, char *buf)
\b OksiD
\par
\p %fl_tolower() and \p %fl_toupper() convert a single Unicode character
from upper to lower case, and vice versa.
\p %fl_utf_tolower() and \p %fl_utf_toupper() convert a string of bytes,
some of which may be multi-byte UTF-8 encodings of Unicode characters,
from upper to lower case, and vice versa.
\par
Warning: to be safe, the length of \p buf must be at least \p 3*len
bytes [for 16-bit Unicode].
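\par
A minimal sketch of lower-casing a UTF-8 string with
\p %fl_utf_tolower(), sizing the output buffer at \p 3*len as the
warning above recommends; it is assumed here that the function returns
the number of bytes written and does not null-terminate the result:
\code
#include <FL/fl_utf8.h>
#include <stdlib.h>
#include <string.h>

char *utf8_tolower(const char *s) {
  int len = (int)strlen(s);
  char *buf = (char *)malloc(3 * len + 1); // worst case, per the warning above
  int n = fl_utf_tolower((const unsigned char *)s, len, buf);
  buf[n] = 0;                              // terminate the converted string
  return buf;                              // caller frees the result
}
\endcode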
int fl_utf_strcasecmp(const char *s1, const char *s2)
\b OksiD
int fl_utf_strncasecmp(const char *s1, const char *s2, int n)
\b OksiD
\par
\p %fl_utf_strcasecmp() is a UTF-8 aware string comparison function that
converts the strings to lower case Unicode as part of the comparison.
\p %fl_utf_strncasecmp() only compares the first \p n characters [bytes?].
\section unicode_system_calls FLTK Unicode versions of system calls
- int fl_access(const char* f, int mode)
\b OksiD
- int fl_chmod(const char* f, int mode)
\b OksiD
- int fl_execvp(const char* file, char* const* argv)
\b OksiD
- FILE* fl_fopen(const char* f, const char* mode)
\b OksiD
- char* fl_getcwd(char* buf, int maxlen)
\b OksiD
- char* fl_getenv(const char* name)
\b OksiD
- char fl_make_path(const char* path) - returns char ?
\b OksiD
- void fl_make_path_for_file(const char* path)
\b OksiD
- int fl_mkdir(const char* f, int mode)
\b OksiD
- int fl_open(const char* f, int o, ...)
\b OksiD
- int fl_rename(const char* f, const char* t)
\b OksiD
- int fl_rmdir(const char* f)
\b OksiD
- int fl_stat(const char* path, struct stat* buffer)
\b OksiD
- int fl_system(const char* f)
\b OksiD
- int fl_unlink(const char* f)
\b OksiD
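\par
These wrappers take UTF-8 encoded names and arguments in place of
strings in the system encoding. As a minimal sketch, opening a file
whose name may contain non-ASCII characters uses \p %fl_fopen() instead
of plain \p fopen(); the helper name is illustrative only:
\code
#include <FL/fl_utf8.h>
#include <stdio.h>

int file_exists_utf8(const char *utf8_name) {
  FILE *fp = fl_fopen(utf8_name, "rb"); // utf8_name may contain non-ASCII characters
  if (!fp) return 0;
  fclose(fp);
  return 1;
}
\endcode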
\par TODO:
\li more doc on unicode, add links
\li write something about filename encoding on OS X...
\li explain the fl_utf8_... commands
\li explain issues with Fl_Preferences
\li why FLTK has no Fl_String class
\par DONE:
\li initial transfer of the Ian/O'ksi'D patch
\li adapted Makefiles and IDEs for available platforms
\li hacked some Unicode keyboard entry for OS X
\par ISSUES:
\li IDEs:
- Makefile support: tested on Fedora Core 5 and OS X, but heaven knows
on which platforms this may fail
- Xcode: tested, seems to be working (but see comments below on OS X)
- VisualC (VC6): tested, test/utf8 works, but may have had some issues
during the merge. Some additional work needed (imm32.lib)
- VisualStudio2005: tested, test/utf8 works, some additional work needed
(imm32.lib)
- VisualCNet: sorry, I no longer have access to that IDE
- Borland and other compilers: sorry, I can't update those
\li Platforms:
- you will encounter problems on all platforms!
- X11: many characters are missing, but that may be related to bad
fonts on my machine. I also could not do any keyboard tests yet.
Rendering seems to generally work ok.
- Win32: US and German keyboards worked ok, but no compositing was
tested. Rendering looks pretty good.
- OS X: rendering looks good. Keyboard is completely messed up, even in
the US setting (with the Alt key)
- all: while merging I have seen plenty of places that are not
entirely utf8-safe, particularly Fl_Input, Fl_Text_Editor, and
Fl_Help_View. Keycodes from the keyboard conflict with Unicode
characters. Right-to-left rendered text cannot be marked or edited,
and probably much more.
*/