Unicode guide/Implementations

This page is my attempt to document my research on the Unicode implementations of various languages and software. It's mostly in note form to keep things from getting out of control.

I apologize for not attaching sources to all of this; I've had to dig into the source code. My best advice here is to

C and C++

C and C++ provide limited functionality related to text handling.

  • Character type: 8-bit, 16-bit or 32-bit, encoding not defined
  • Byte strings: No, just regular arrays
  • Internal encoding: None
  • String encoding: Depends on locale
  • Supports bytes in strings: Depends on locale encoding
  • Supports surrogates in strings: Depends on locale encoding
  • Supports invalid code points in strings: Depends on locale encoding
  • Supports normalizing strings: No
  • Supports querying character properties: No
  • Supports breaking by code point: No
  • Supports breaking by extended grapheme cluster: No
  • Supports breaking by text boundaries: No
  • Supports encoding and decoding to other encodings: Yes
  • Supports Unicode regex extensions: No; C includes no regex, and C++'s std::regex has no Unicode extensions
  • Classifies by: Locale information, only supports single characters
  • Collates by: Locale information, supports arbitrary strings
  • Converts case by: Locale information, only supports single characters
  • Locale tailoring is done by: Current locale
  • Wraps operating system APIs with Unicode ones: No

This could be classified as 'Unicode agnostic', but classification and case conversion are limited to single characters. As a result this is just broken, even with the limited functionality it provides.

Different platforms usually provide a clearer definition:

  • On POSIX, characters are usually 8-bit ASCII-compatible values
  • On Windows, characters are 16-bit UTF-16-compatible values
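
To make the 'single characters only' model concrete, here is a minimal sketch in C. It assumes a POSIX-like system where an en_US.UTF-8 locale exists and that the source file is UTF-8; the locale name and the sample text are just illustrative.

  /* Classification and case conversion work one character at a time,
     using whatever encoding the current locale defines. */
  #include <locale.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <wchar.h>
  #include <wctype.h>

  int main(void) {
      setlocale(LC_ALL, "en_US.UTF-8");   /* everything below depends on this */

      const char *text = "straße";        /* bytes in the locale's encoding */
      const char *p = text;
      mbstate_t state = {0};
      wchar_t wc;
      size_t len;

      /* Decode one character at a time according to the locale... */
      while ((len = mbrtowc(&wc, p, MB_CUR_MAX, &state)) > 0 &&
             len != (size_t)-1 && len != (size_t)-2) {
          /* ...then classify and convert it in isolation: towupper maps one
             wide character to one wide character, so "ß" can never become
             "SS" here. */
          printf("%lc -> %lc (alpha: %d)\n", wc, towupper(wc), iswalpha(wc) != 0);
          p += len;
      }
      return 0;
  }

Even this only works when the source encoding, the runtime locale and the terminal happen to agree, which is exactly the coordination C and C++ leave to the platform.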

Lua

Lua describes itself as 'encoding-agnostic', whatever that means. It certainly handles ASCII well.

  • Character type: Byte, encoding not defined
  • Byte strings: No
  • Internal encoding: None
  • String encoding: Undefined
  • Supports bytes in strings: Depends on encoding
  • Supports surrogates in strings: Depends on encoding
  • Supports invalid code points in strings: Depends on encoding
  • Supports normalizing strings: No
  • Supports querying character properties: No
  • Supports breaking by code point: Yes if encoded in UTF-8
  • Supports breaking by extended grapheme cluster: No
  • Supports breaking by text boundaries: No
  • Supports encoding and decoding to other encodings: No
  • Supports Unicode regex extensions: Not applicable, no regex at all
  • Classifies by: C APIs, maybe by locale, only supports 8-bit characters
  • Collates by: Doesn't provide an API for this
  • Converts case by: C APIs, maybe by locale, only supports 8-bit characters
  • Locale tailoring is done by: Per-process C locale
  • Wraps operating system APIs with Unicode ones: No

Overall there's no clear path here from reading bytes to handling Unicode.

Bonus points for the string.reverse function that will break Unicode strings.
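
A minimal sketch of the above, assuming Lua 5.3 or later (which ships the utf8 library) and a string that really does hold UTF-8; the sample text is made up:

  local s = "naïve"                -- 5 code points, 6 bytes as UTF-8

  -- Breaking by code point works, but only because we know it's UTF-8:
  for _, cp in utf8.codes(s) do
    io.write(string.format("U+%04X ", cp))
  end
  io.write("\n")

  -- The byte-oriented string functions know nothing about any of this:
  print(#s)           --> 6: the byte length, not the character count
  print(s:upper())    --> "ï" is untouched; only ASCII bytes are uppercased
  print(s:reverse())  --> reverses bytes, splitting "ï" into invalid UTF-8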

Python

Python spent an enormous amount of time adding Unicode support in version 3.

  • Character type: Unicode code point
  • Byte strings: Yes
  • Internal encoding: 8-bit, 16-bit or 32-bit depending on the string
  • String encoding: Unicode code points
  • Supports bytes in strings: Yes, using PEP 383
  • Supports surrogates in strings: Yes
  • Supports invalid code points in strings: No
  • Supports normalizing strings: Yes
  • Supports querying character properties: Yes
  • Supports breaking by code point: Yes
  • Supports breaking by extended grapheme cluster: No
  • Supports breaking by text boundaries: No
  • Supports encoding and decoding to other encodings: Yes
  • Supports Unicode regex extensions: No
  • Classifies by: Unicode database
  • Collates by: Doesn't provide an API for this
  • Converts case by: Unicode database
  • Locale tailoring is done by: Doesn't provide an API for this
  • Wraps operating system APIs with Unicode ones: Yes, with invalid bytes encoded as surrogates

This is better than most languages, but still not what most people would want.
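
A minimal sketch of a few of the properties above; the sample strings and the stray byte are made up for illustration:

  import unicodedata

  s = "e\u0301"   # "é" spelled as 'e' plus a combining acute accent

  # Character properties and normalization come from the Unicode database:
  print(unicodedata.name("\u0301"))        # COMBINING ACUTE ACCENT
  print(unicodedata.normalize("NFC", s))   # composed into a single code point

  # Strings are sequences of code points, so len(), indexing and slicing
  # break by code point, not by extended grapheme cluster:
  print(len(s))        # 2, even though it renders as one character
  print(s[::-1])       # reversing splits the cluster apart

  # PEP 383: bytes that aren't valid in the expected encoding survive a
  # round trip through str as lone surrogates ("surrogateescape"), which
  # is how the OS API wrappers carry invalid bytes through:
  raw = b"caf\xff"                                   # \xff is not valid UTF-8
  name = raw.decode("utf-8", "surrogateescape")
  print(repr(name))                                  # 'caf\udcff'
  print(name.encode("utf-8", "surrogateescape") == raw)   # True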

Rust

Java

Swift

Go

Kotlin

Python 3

Tcl

Squirrel

Perl

Ruby

Zig

Elixir

- Erlang too?

Raku

Haskell

PHP

- narrow APIs

- mbstring

JavaScript