Unicode guide

This is a WIP page, take nothing here as final.

If you've ever tried to learn Unicode you've most likely looked at online tutorials and learning resources. These tend to focus on specific details of how Unicode works instead of the broader picture.

This guide is my attempt to help you build a mental model of Unicode that can be used to write functional software and navigate the official Unicode standards and resources.

As a disclaimer: I'm just a random person, some of this might be wrong. But hopefully by the end of reading this you should be able to correct me.

Standards

The Unicode standard defines the following:

  • A large numeric codespace
  • A large multilingual database of characters
  • A database of character properties
  • How to encode and decode the codespace
  • How to normalize equivalent text
  • How to map text between different cases
  • How to segment text into words, sentences, lines, and paragraphs
  • How to determine text direction

Some portions of the standard may be overridden (also known as 'tailoring') to aid in localization.
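
To make that list a little more concrete, here's a minimal Python sketch (using the built-in unicodedata module) of two of the pieces above: normalizing equivalent text and mapping case. The exact results depend on the Unicode version bundled with your Python.

  import unicodedata

  precomposed = "\u00E9"   # 'é' as a single code point
  combining   = "e\u0301"  # 'e' followed by COMBINING ACUTE ACCENT
  print(precomposed == combining)                     # False: different code points
  print(unicodedata.normalize("NFC", precomposed) ==
        unicodedata.normalize("NFC", combining))      # True: the texts are canonically equivalent
  print("Straße".casefold() == "STRASSE".casefold())  # True: full case folding maps ß to ss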

The standard is freely available online in the following pieces:

The Unicode Consortium also defines these in separate standards:

  • How to order text for sorting
  • How to incorporate Unicode into regular expressions
  • How to handle emoji sequences
  • How to handle confusable characters and other security concerns
  • A repository of shared localization data

These are also freely available online at:

Policies for stability in these standards can be found at the Unicode Consortium Policies page.

Characters

Unicode provides two distinct definitions of the term 'character': Abstract characters and encoded characters.

Abstract characters are the units that make up textual data on a computer. These are usually some portion of a written script that has a unique identity independent of Unicode, such as a letter, symbol, accent, logogram, or spacing, but they may be something else entirely. The best way to think of these is as atoms used to handle text editing, display, organization, and storage.

Encoded characters are mappings of an abstract character to the Unicode codespace as a code point. This is almost always what people mean by 'character' in Unicode discussions. There's not a 1:1 mapping between abstract and encoded characters: an abstract character might be mapped multiple times to aid in compatibility with other character sets, it might not be mapped at all and instead be represented using a sequence of other encoded characters, or it might not be representable at all and require addition in a future Unicode version.
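
A small Python sketch of this non-1:1 mapping, using the standard unicodedata module: the ångström sign exists both as a dedicated compatibility code point and as the ordinary letter Å, while (as far as I know) 'm with circumflex' has no code point of its own and must be written as a sequence.

  import unicodedata

  # One abstract character encoded twice for compatibility with older character sets:
  angstrom = "\u212B"   # ANGSTROM SIGN
  a_ring   = "\u00C5"   # LATIN CAPITAL LETTER A WITH RING ABOVE
  print(angstrom == a_ring)                                  # False: two distinct encoded characters
  print(unicodedata.normalize("NFC", angstrom) == a_ring)    # True: normalization unifies them

  # An abstract character with no code point of its own, written as a sequence:
  m_circumflex = "m\u0302"   # 'm' + COMBINING CIRCUMFLEX ACCENT
  print([unicodedata.name(c) for c in m_circumflex])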

In addition to having a code point each character has a set of properties that provide information about the character to aid in writing Unicode algorithms. These include things like name, case, category, script, direction, numeric value, and rendering information.
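
You can poke at a few of these properties with Python's unicodedata module, which exposes some but not all of them (script, for example, needs a third-party library):

  import unicodedata

  ch = "\u0665"   # ARABIC-INDIC DIGIT FIVE
  print(unicodedata.name(ch))            # ARABIC-INDIC DIGIT FIVE
  print(unicodedata.category(ch))        # Nd (Number, decimal digit)
  print(unicodedata.bidirectional(ch))   # AN (Arabic Number)
  print(unicodedata.numeric(ch))         # 5.0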

The last point I want to make is a warning: Characters do not correspond to some human identifiable unit of text such as a glyph, letter, phoneme, syllable, vowel or consonant. They are only useful for building higher level abstractions. General text processing should be done with groups of characters and Unicode-aware algorithms.
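
A quick Python illustration of why this matters: a single user-perceived unit can span several encoded characters, so naive per-character operations misbehave.

  flag = "\U0001F1E6\U0001F1FA"   # 🇦🇺 built from two REGIONAL INDICATOR characters
  print(len(flag))                # 2, even though it renders as one flag
  print(flag[::-1])               # reversing the characters yields 🇺🇦, a different flag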

Text

- levels of abstraction

- indexing

- sort

- match

- search

- normalize

- serialize

- case map

- properties

- breaking/segmentation

- reversing

TODO:

languages/locales

Non-Unicode compatibility

- preserving data

Level 1: Bytes

Your basic unit is the byte. You can compare, search, split, and sort.

filesystem/unix/C
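
A minimal Python sketch of working at this level; nothing here knows or cares that the bytes happen to be UTF-8:

  data = "naïve".encode("utf-8")
  print(data)                  # b'na\xc3\xafve': the 'ï' became two bytes
  print(len(data))             # 6 bytes for 5 letters
  print(data.split(b"\xc3"))   # splitting on a byte can cut a character in half
  print(sorted(data))          # sorting gives raw byte values, not any linguistic order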

Level 2: Code units

Your basic unit is the smallest unit of your Unicode encoding: a byte for UTF-8, a 16-bit integer for UTF-16, or a 32-bit integer for UTF-32. You can compare, search, split, and sort. To get to this point you have to handle endianness.

windows
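
A rough Python sketch of seeing UTF-16 code units: endianness has to be resolved (here via the byte order mark) before the 16-bit units are even visible.

  import struct

  data = "héllo".encode("utf-16")             # Python prepends a byte order mark
  bom, payload = data[:2], data[2:]
  fmt = "<H" if bom == b"\xff\xfe" else ">H"  # little- or big-endian 16-bit units
  units = [struct.unpack_from(fmt, payload, i)[0] for i in range(0, len(payload), 2)]
  print([hex(u) for u in units])              # ['0x68', '0xe9', '0x6c', '0x6c', '0x6f']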

Level 3: Unicode scalars

Your basic unit is a number between 0x0 and 0x10FFFF inclusive, excluding the surrogate range. To get to this point you have to decode UTF-8, UTF-16, or UTF-32. You can compare, search, split, and so on, but it's important to note that these are just numbers; there's no meaning attached to them.

python
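
In Python terms, a decoded str is a sequence of scalars; they're just numbers until something interprets them:

  scalars = [ord(c) for c in b"a\xc3\xa9\xe2\x82\xac".decode("utf-8")]   # decode some UTF-8 bytes
  print([hex(s) for s in scalars])              # ['0x61', '0xe9', '0x20ac']: 'a', 'é', '€'
  print(chr(0x20AC))                            # '€': mapping a number back to a scalar
  print(all(s <= 0x10FFFF for s in scalars))    # True: scalars never leave the codespace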

TODO: Code points can be noncharacters or reserved characters, uh-oh

Level 4: Unicode characters

Your basic unit is a code point that your runtime recognizes and is willing to interpret using a copy of the Unicode character database. Results vary according to the supported Unicode version. You can normalize, compare, match, search, split, and case map strings. Locale-specific operations may be provided. To get these the runtime needs to check whether the characters are supported.

???
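
A small Python sketch of the database dependency; what the runtime knows about a code point depends on which Unicode version its copy of the database tracks:

  import unicodedata

  print(unicodedata.unidata_version)          # e.g. '14.0.0', varies by Python version
  print(unicodedata.category("\U0001FAE0"))   # 'So' if MELTING FACE is known, 'Cn' if not
  try:
      print(unicodedata.name("\U0001FAE0"))
  except ValueError:
      print("this database doesn't know that character")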

TODO: noncharacters

Level 5: Segmented text

Your basic unit is a string of Unicode characters of some kind, such as a word, paragraph, or grapheme cluster. To get these you need to convert from a string of Unicode characters using breaking/segmentation rules.

swift/raku
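
Python has no built-in grapheme cluster segmentation, but as a sketch, the third-party regex module's \X pattern matches extended grapheme clusters:

  import regex   # pip install regex

  text = "nai\u0308ve 🇦🇺"            # 'ï' written as a combining sequence, plus a flag
  print(len(text))                    # 9 code points
  print(regex.findall(r"\X", text))   # ['n', 'a', 'ï', 'v', 'e', ' ', '🇦🇺']: 7 grapheme clusters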

Further reading

I highly recommend reading the following resources:

You might also find the following tools helpful:

While writing this page I researched and documented Unicode support in various programming languages. You can see my notes here: Unicode guide/Implementations.